Feb 27 10:26:18 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 27 10:26:18 crc restorecon[4693]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 27 10:26:18 crc restorecon[4693]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 27 10:26:18 crc restorecon[4693]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 27 10:26:18 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 10:26:18 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 10:26:18 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 10:26:18 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 10:26:18 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 10:26:18 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 10:26:18 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 10:26:18 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 27 10:26:18 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 10:26:19 crc restorecon[4693]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc 
restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc 
restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 
10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 
10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 10:26:19 crc 
restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc 
restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 10:26:19 crc restorecon[4693]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc 
restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 
crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc 
restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc 
restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc 
restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc 
restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc 
restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 10:26:19 crc restorecon[4693]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 10:26:19 crc restorecon[4693]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 27 10:26:20 crc kubenswrapper[4728]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 10:26:20 crc kubenswrapper[4728]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 27 10:26:20 crc kubenswrapper[4728]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 10:26:20 crc kubenswrapper[4728]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 27 10:26:20 crc kubenswrapper[4728]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 27 10:26:20 crc kubenswrapper[4728]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.476193 4728 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483047 4728 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483067 4728 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483074 4728 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483079 4728 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483084 4728 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483091 4728 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483098 4728 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483103 4728 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483108 4728 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483113 4728 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483117 4728 feature_gate.go:330] unrecognized feature gate: Example Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483122 4728 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483126 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483141 4728 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483149 4728 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483154 4728 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483159 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483163 4728 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483168 4728 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483172 4728 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483177 4728 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483182 4728 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483186 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483191 4728 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483196 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483202 4728 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483208 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483214 4728 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483219 4728 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483225 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483229 4728 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483234 4728 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483239 4728 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483245 4728 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483251 4728 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483257 4728 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483263 4728 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483268 4728 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483273 4728 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483278 4728 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483282 4728 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483295 4728 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483300 4728 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483305 4728 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483310 4728 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483315 4728 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483320 4728 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483324 4728 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483329 4728 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483333 4728 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483340 4728 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483346 4728 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483351 4728 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483356 4728 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483361 4728 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483365 4728 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483370 4728 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483376 4728 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483380 4728 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483384 4728 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483388 4728 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483393 4728 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483397 4728 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483402 4728 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483407 4728 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483411 4728 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483416 4728 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483420 4728 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483425 4728 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483429 4728 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.483433 4728 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484249 4728 flags.go:64] FLAG: --address="0.0.0.0"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484265 4728 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484281 4728 flags.go:64] FLAG: --anonymous-auth="true"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484288 4728 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484294 4728 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484309 4728 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484324 4728 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484330 4728 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484336 4728 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484341 4728 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484347 4728 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484352 4728 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484357 4728 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484363 4728 flags.go:64] FLAG: --cgroup-root=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484368 4728 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484373 4728 flags.go:64] FLAG: --client-ca-file=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484377 4728 flags.go:64] FLAG: --cloud-config=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484382 4728 flags.go:64] FLAG: --cloud-provider=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484387 4728 flags.go:64] FLAG: --cluster-dns="[]"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484402 4728 flags.go:64] FLAG: --cluster-domain=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484408 4728 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484413 4728 flags.go:64] FLAG: --config-dir=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484418 4728 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484423 4728 flags.go:64] FLAG: --container-log-max-files="5"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484430 4728 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484436 4728 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484443 4728 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484449 4728 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484454 4728 flags.go:64] FLAG: --contention-profiling="false"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484460 4728 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484466 4728 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484471 4728 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484477 4728 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484483 4728 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484488 4728 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484494 4728 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484499 4728 flags.go:64] FLAG: --enable-load-reader="false"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484524 4728 flags.go:64] FLAG: --enable-server="true"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484529 4728 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484541 4728 flags.go:64] FLAG: --event-burst="100"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484546 4728 flags.go:64] FLAG: --event-qps="50"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484551 4728 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484565 4728 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484571 4728 flags.go:64] FLAG: --eviction-hard=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484582 4728 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484587 4728 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484591 4728 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484597 4728 flags.go:64] FLAG: --eviction-soft=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484606 4728 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484611 4728 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484616 4728 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484621 4728 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484626 4728 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484631 4728 flags.go:64] FLAG: --fail-swap-on="true"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484636 4728 flags.go:64] FLAG: --feature-gates=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484642 4728 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484647 4728 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484653 4728 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484660 4728 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484665 4728 flags.go:64] FLAG: --healthz-port="10248"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484670 4728 flags.go:64] FLAG: --help="false"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484675 4728 flags.go:64] FLAG: --hostname-override=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484680 4728 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484686 4728 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484690 4728 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484695 4728 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484700 4728 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484705 4728 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484711 4728 flags.go:64] FLAG: --image-service-endpoint=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484716 4728 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484721 4728 flags.go:64] FLAG: --kube-api-burst="100"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484726 4728 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484731 4728 flags.go:64] FLAG: --kube-api-qps="50"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484736 4728 flags.go:64] FLAG: --kube-reserved=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484741 4728 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484745 4728 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484750 4728 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484755 4728 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484768 4728 flags.go:64] FLAG: --lock-file=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484774 4728 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484778 4728 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484783 4728 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484791 4728 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484796 4728 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484801 4728 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484807 4728 flags.go:64] FLAG: --logging-format="text"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484812 4728 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484819 4728 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484824 4728 flags.go:64] FLAG: --manifest-url=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484830 4728 flags.go:64] FLAG: --manifest-url-header=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484839 4728 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484844 4728 flags.go:64] FLAG: --max-open-files="1000000"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484850 4728 flags.go:64] FLAG: --max-pods="110"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484856 4728 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484861 4728 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484865 4728 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484870 4728 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484875 4728 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484881 4728 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484886 4728 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484898 4728 flags.go:64] FLAG: --node-status-max-images="50"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484904 4728 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484909 4728 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484915 4728 flags.go:64] FLAG: --pod-cidr=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484921 4728 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484936 4728 flags.go:64] FLAG: --pod-manifest-path=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484942 4728 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484947 4728 flags.go:64] FLAG: --pods-per-core="0"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484952 4728 flags.go:64] FLAG: --port="10250"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484957 4728 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484962 4728 flags.go:64] FLAG: --provider-id=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484966 4728 flags.go:64] FLAG: --qos-reserved=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484971 4728 flags.go:64] FLAG: --read-only-port="10255"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484977 4728 flags.go:64] FLAG: --register-node="true"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484990 4728 flags.go:64] FLAG: --register-schedulable="true"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.484995 4728 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485004 4728 flags.go:64] FLAG: --registry-burst="10"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485009 4728 flags.go:64] FLAG: --registry-qps="5"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485014 4728 flags.go:64] FLAG: --reserved-cpus=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485019 4728 flags.go:64] FLAG: --reserved-memory=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485026 4728 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485032 4728 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485038 4728 flags.go:64] FLAG: --rotate-certificates="false"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485043 4728 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485048 4728 flags.go:64] FLAG: --runonce="false"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485053 4728 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485058 4728 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485063 4728 flags.go:64] FLAG: --seccomp-default="false"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485068 4728 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485073 4728 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485078 4728 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485083 4728 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485088 4728 flags.go:64] FLAG: --storage-driver-password="root"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485093 4728 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485098 4728 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485103 4728 flags.go:64] FLAG: --storage-driver-user="root"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485108 4728 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485113 4728 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485118 4728 flags.go:64] FLAG: --system-cgroups=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485123 4728 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485131 4728 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485135 4728 flags.go:64] FLAG: --tls-cert-file=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485164 4728 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485177 4728 flags.go:64] FLAG: --tls-min-version=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485182 4728 flags.go:64] FLAG: --tls-private-key-file=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485186 4728 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485191 4728 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485196 4728 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485202 4728 flags.go:64] FLAG: --v="2"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485209 4728 flags.go:64] FLAG: --version="false"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485223 4728 flags.go:64] FLAG: --vmodule=""
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485229 4728 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485234 4728 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485391 4728 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485400 4728 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485405 4728 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485410 4728 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485414 4728 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485418 4728 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485423 4728 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485428 4728 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485434 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485438 4728 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485444 4728 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485448 4728 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485453 4728 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485457 4728 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485462 4728 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485466 4728 feature_gate.go:330] unrecognized feature gate: Example
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485470 4728 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485475 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485479 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485484 4728 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485489 4728 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485493 4728 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485497 4728 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485519 4728 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485525 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485532 4728 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485537 4728 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485541 4728 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485546 4728 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485550 4728 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485555 4728 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485559 4728 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485564 4728 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485579 4728 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485584 4728 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485589 4728 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485593 4728 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485599 4728 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485604 4728 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485609 4728 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485614 4728 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485618 4728 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485623 4728 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485627 4728 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485631 4728 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485635 4728 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485642 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485646 4728 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485651 4728 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485655 4728 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485659 4728 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485663 4728 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485669 4728 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485673 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485678 4728 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485682 4728 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485687 4728 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485691 4728 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485695 4728 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485700 4728 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485704 4728 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485709 4728 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485713 4728 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485717 4728 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485723 4728 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485729 4728 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485734 4728 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485740 4728 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485775 4728 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485803 4728 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.485810 4728 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.485818 4728 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.497894 4728 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.497956 4728 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498123 4728 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498145 4728 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498155 4728 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498165 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498174 4728 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498182 4728 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498189 4728 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498197 4728 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498205 4728 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498213 4728 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498221 4728 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498229 4728 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498237 4728 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498245 4728 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498253 4728 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498261 4728 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498268 4728 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498276 4728 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498284 4728 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498291 4728 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498299 4728 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498306 4728 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498314 4728 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498322 4728 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498343 4728 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498351 4728 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498359 4728 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498366 4728 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498374 4728 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498381 4728 feature_gate.go:330] 
unrecognized feature gate: MultiArchInstallAWS Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498389 4728 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498397 4728 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498405 4728 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498412 4728 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498420 4728 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498427 4728 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498435 4728 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498443 4728 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498451 4728 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498458 4728 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498465 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498473 4728 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498481 4728 feature_gate.go:330] unrecognized feature gate: Example Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498488 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 27 10:26:20 crc 
kubenswrapper[4728]: W0227 10:26:20.498496 4728 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498528 4728 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498537 4728 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498545 4728 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498552 4728 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498560 4728 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498568 4728 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498575 4728 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498583 4728 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498593 4728 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498605 4728 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498616 4728 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498627 4728 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498637 4728 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498645 4728 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498656 4728 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498678 4728 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498686 4728 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498694 4728 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498702 4728 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498710 4728 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498719 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498729 4728 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498736 4728 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498744 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498752 4728 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.498759 4728 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 27 10:26:20 crc 
kubenswrapper[4728]: I0227 10:26:20.498771 4728 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499073 4728 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499086 4728 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499095 4728 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499103 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499110 4728 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499118 4728 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499125 4728 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499133 4728 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499141 4728 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499148 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499155 4728 
feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499163 4728 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499170 4728 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499178 4728 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499189 4728 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499198 4728 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499206 4728 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499214 4728 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499222 4728 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499229 4728 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499237 4728 feature_gate.go:330] unrecognized feature gate: Example Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499244 4728 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499251 4728 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499260 4728 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499279 4728 feature_gate.go:330] unrecognized 
feature gate: AdminNetworkPolicy Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499287 4728 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499296 4728 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499303 4728 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499311 4728 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499319 4728 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499326 4728 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499334 4728 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499341 4728 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499349 4728 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499359 4728 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499368 4728 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499378 4728 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499386 4728 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499395 4728 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499405 4728 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499415 4728 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499424 4728 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499433 4728 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499441 4728 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499449 4728 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499458 4728 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499468 4728 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499478 4728 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499486 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499494 4728 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499526 4728 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499534 4728 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499542 4728 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499549 4728 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499558 4728 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499565 4728 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499573 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499580 4728 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499588 4728 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499595 4728 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499616 4728 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499624 4728 feature_gate.go:330] 
unrecognized feature gate: ManagedBootImages Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499632 4728 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499640 4728 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499647 4728 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499655 4728 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499663 4728 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499670 4728 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499678 4728 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499685 4728 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.499710 4728 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.499722 4728 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.500827 4728 server.go:940] "Client rotation is on, will bootstrap in 
background" Feb 27 10:26:20 crc kubenswrapper[4728]: E0227 10:26:20.508707 4728 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.512391 4728 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.512499 4728 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.514228 4728 server.go:997] "Starting client certificate rotation" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.514258 4728 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.514414 4728 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.545399 4728 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 27 10:26:20 crc kubenswrapper[4728]: E0227 10:26:20.546536 4728 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.547671 4728 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.563650 4728 log.go:25] 
"Validated CRI v1 runtime API" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.602960 4728 log.go:25] "Validated CRI v1 image API" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.604642 4728 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.609025 4728 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-27-10-21-33-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.609074 4728 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.634838 4728 manager.go:217] Machine: {Timestamp:2026-02-27 10:26:20.632267599 +0000 UTC m=+0.594633785 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:08a24311-ed07-4912-ba2b-648ea93d1dc5 BootID:79ce2621-f919-4f1d-8b5b-b727bcba43c7 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 
DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:bd:6a:15 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:bd:6a:15 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:38:12:21 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:53:8b:c5 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d5:e9:de Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:66:1d:a0 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:42:60:60:c6:38:58 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:2a:9d:d3:88:b7:54 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 
Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 
Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.635344 4728 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.635571 4728 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.636214 4728 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.636679 4728 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.636723 4728 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.637030 4728 topology_manager.go:138] "Creating topology manager with none policy" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.637050 4728 container_manager_linux.go:303] "Creating device plugin manager" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.637774 4728 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.637839 4728 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.638090 4728 state_mem.go:36] "Initialized new in-memory state store" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.638217 4728 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.642183 4728 kubelet.go:418] "Attempting to sync node with API server" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.642222 4728 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.642261 4728 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.642281 4728 kubelet.go:324] "Adding apiserver pod source" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.642300 4728 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.646427 4728 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.647461 4728 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.649087 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 27 10:26:20 crc kubenswrapper[4728]: E0227 10:26:20.649241 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.649252 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 27 10:26:20 crc kubenswrapper[4728]: E0227 10:26:20.649340 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.650125 4728 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.651894 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.651936 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 
10:26:20.651951 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.651967 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.651989 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.652002 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.652015 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.652036 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.652052 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.652068 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.652086 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.652100 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.653823 4728 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.654550 4728 server.go:1280] "Started kubelet" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.654707 4728 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.654799 4728 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.655250 4728 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.655321 4728 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 27 10:26:20 crc systemd[1]: Started Kubernetes Kubelet. Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.658789 4728 server.go:460] "Adding debug handlers to kubelet server" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.661632 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.661699 4728 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.661841 4728 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.661863 4728 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 27 10:26:20 crc kubenswrapper[4728]: E0227 10:26:20.661888 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.662035 4728 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 27 10:26:20 crc kubenswrapper[4728]: E0227 10:26:20.662431 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="200ms" Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.662656 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 27 10:26:20 crc kubenswrapper[4728]: E0227 10:26:20.662830 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.662863 4728 factory.go:55] Registering systemd factory Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.662886 4728 factory.go:221] Registration of the systemd container factory successfully Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.663293 4728 factory.go:153] Registering CRI-O factory Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.663328 4728 factory.go:221] Registration of the crio container factory successfully Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.663463 4728 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.663538 4728 factory.go:103] Registering Raw factory Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.663571 4728 manager.go:1196] Started watching for new ooms in manager Feb 27 10:26:20 crc kubenswrapper[4728]: E0227 10:26:20.662742 4728 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189813943620a7b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.65448748 +0000 UTC m=+0.616853616,LastTimestamp:2026-02-27 10:26:20.65448748 +0000 UTC m=+0.616853616,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.664632 4728 manager.go:319] Starting recovery of all containers Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.679778 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.679870 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.679890 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.679904 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.679919 4728 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.679932 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.679946 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.679958 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.679974 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680009 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680022 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680034 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680048 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680063 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680078 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680094 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680108 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" 
seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680119 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680131 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680146 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680159 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680171 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680183 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 
10:26:20.680196 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680209 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680221 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680239 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680252 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680265 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680276 4728 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680286 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680299 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680314 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680328 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680338 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680351 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680365 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680375 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680386 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680401 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680448 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680460 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680475 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680489 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680517 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680532 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680546 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680558 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680573 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680587 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680601 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680613 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680662 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680679 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680694 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680711 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680727 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680740 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680754 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680767 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680780 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680793 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680807 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680823 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680840 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680853 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" 
seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680865 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680878 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680890 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680900 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680912 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680927 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680941 4728 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680954 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680966 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680982 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.680995 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681009 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681023 4728 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681037 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681052 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681066 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681078 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681091 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681134 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681148 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681160 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681175 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681188 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681199 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681217 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681234 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681247 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681261 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681276 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681291 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681306 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" 
seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681321 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681334 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681347 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681362 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681375 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681390 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681404 
4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681428 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681447 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681464 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681479 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681494 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681528 4728 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681544 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681560 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681576 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681591 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681605 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681620 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.681634 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.686590 4728 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.686981 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687001 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687019 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687235 4728 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687263 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687614 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687635 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687651 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687664 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687679 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687693 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687712 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687727 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687741 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687754 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687767 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687782 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687795 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687810 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687826 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687840 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687855 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687870 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687885 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687899 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687912 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687929 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687942 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687955 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687971 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687982 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.687999 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.688012 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.688027 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.689565 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.689699 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.689765 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.689813 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.689895 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.689930 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 27 10:26:20 
crc kubenswrapper[4728]: I0227 10:26:20.689947 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.689997 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690021 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690088 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690121 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690151 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690180 4728 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690211 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690231 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690261 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690316 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690351 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690383 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690426 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690443 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690459 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690475 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690492 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690578 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690619 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690649 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690682 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690712 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690743 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690759 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" 
seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690792 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690825 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690857 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690890 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690923 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.690980 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 27 10:26:20 crc 
kubenswrapper[4728]: I0227 10:26:20.690998 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691015 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691077 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691097 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691134 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691165 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691197 4728 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691229 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691260 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691277 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691307 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691337 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691369 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691386 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691430 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691461 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691495 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691531 4728 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691529 4728 manager.go:324] Recovery completed Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691549 4728 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691624 4728 reconstruct.go:97] "Volume reconstruction finished" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.691638 4728 reconciler.go:26] "Reconciler: start to sync state" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.702059 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.703799 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.703850 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.703867 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.707491 4728 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.707533 4728 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.707564 4728 state_mem.go:36] "Initialized new in-memory state store" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.719758 4728 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.721860 4728 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.723547 4728 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.723593 4728 kubelet.go:2335] "Starting kubelet main sync loop" Feb 27 10:26:20 crc kubenswrapper[4728]: E0227 10:26:20.723645 4728 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 27 10:26:20 crc kubenswrapper[4728]: W0227 10:26:20.724327 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 27 10:26:20 crc kubenswrapper[4728]: E0227 10:26:20.724416 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.730534 4728 policy_none.go:49] "None policy: Start" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.731363 4728 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.731399 4728 state_mem.go:35] "Initializing new in-memory state store" Feb 27 10:26:20 crc kubenswrapper[4728]: E0227 10:26:20.762117 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.781816 4728 manager.go:334] "Starting Device Plugin manager" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.782144 4728 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.782184 4728 server.go:79] "Starting device plugin registration server" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.782856 4728 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.782892 4728 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.783224 4728 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.783363 4728 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.783396 4728 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 27 10:26:20 crc kubenswrapper[4728]: E0227 10:26:20.797972 4728 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.824446 4728 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.824582 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.827229 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.827277 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.827291 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.827460 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.827858 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.827929 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.828462 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.828558 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.828578 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.828918 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.829013 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.829097 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.832026 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.832649 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.832694 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.831912 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.833237 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.833275 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.833279 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.833294 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.833306 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.833495 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:20 crc 
kubenswrapper[4728]: I0227 10:26:20.833657 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.833720 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.834411 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.834430 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.834442 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.834557 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.834829 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.834898 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.835188 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.835211 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.835221 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.835240 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.835258 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.835266 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.835384 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.835420 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.835937 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.835964 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.836561 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.836699 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.836722 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.836733 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:20 crc kubenswrapper[4728]: E0227 10:26:20.863077 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="400ms" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.883115 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.884480 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:20 crc 
kubenswrapper[4728]: I0227 10:26:20.884543 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.884557 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.884588 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 10:26:20 crc kubenswrapper[4728]: E0227 10:26:20.885020 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.893262 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.893309 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.893336 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.893356 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.893421 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.893447 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.893566 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.893620 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.893681 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.893746 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.893814 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.893896 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.893956 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.894007 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.894053 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995090 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995159 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995198 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995230 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995262 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995290 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995314 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995359 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995365 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995366 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995457 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995302 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995320 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995541 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995580 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995622 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995643 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995648 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995661 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995685 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995623 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995700 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995728 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995729 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995747 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995757 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995771 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995812 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995822 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:26:20 crc kubenswrapper[4728]: I0227 10:26:20.995852 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.085403 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.087015 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 
10:26:21.087076 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.087092 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.087127 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 10:26:21 crc kubenswrapper[4728]: E0227 10:26:21.087784 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc" Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.168054 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.182135 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.204362 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 27 10:26:21 crc kubenswrapper[4728]: W0227 10:26:21.219059 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2b8c10e682015f68ef46e41f7cd3963e83e4c1186ce19fe64b02f89478816156 WatchSource:0}: Error finding container 2b8c10e682015f68ef46e41f7cd3963e83e4c1186ce19fe64b02f89478816156: Status 404 returned error can't find the container with id 2b8c10e682015f68ef46e41f7cd3963e83e4c1186ce19fe64b02f89478816156 Feb 27 10:26:21 crc kubenswrapper[4728]: W0227 10:26:21.220880 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-a5a6e4ca1556d1ae9e3bfcd86c75fbed39c26a593f3d99b7d2ade0a5b7db57db WatchSource:0}: Error finding container a5a6e4ca1556d1ae9e3bfcd86c75fbed39c26a593f3d99b7d2ade0a5b7db57db: Status 404 returned error can't find the container with id a5a6e4ca1556d1ae9e3bfcd86c75fbed39c26a593f3d99b7d2ade0a5b7db57db Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.225907 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:26:21 crc kubenswrapper[4728]: W0227 10:26:21.226438 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-61dc6d0bc6091e6ac75ebe929b83ad8fb1bf4a00fb3df47a5ff24cb01bee303f WatchSource:0}: Error finding container 61dc6d0bc6091e6ac75ebe929b83ad8fb1bf4a00fb3df47a5ff24cb01bee303f: Status 404 returned error can't find the container with id 61dc6d0bc6091e6ac75ebe929b83ad8fb1bf4a00fb3df47a5ff24cb01bee303f Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.232979 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:26:21 crc kubenswrapper[4728]: W0227 10:26:21.249822 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-9eaf7506dfb017276e2319799638fdabc5f37c65696050063a526fbb6f89ba92 WatchSource:0}: Error finding container 9eaf7506dfb017276e2319799638fdabc5f37c65696050063a526fbb6f89ba92: Status 404 returned error can't find the container with id 9eaf7506dfb017276e2319799638fdabc5f37c65696050063a526fbb6f89ba92 Feb 27 10:26:21 crc kubenswrapper[4728]: E0227 10:26:21.264762 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="800ms" Feb 27 10:26:21 crc kubenswrapper[4728]: W0227 10:26:21.272185 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-e2c3359761c3ed821e837b01cc8c13bbf464216d5a5597027bb492834ea57edf WatchSource:0}: Error finding container e2c3359761c3ed821e837b01cc8c13bbf464216d5a5597027bb492834ea57edf: Status 404 returned error can't find the container with id e2c3359761c3ed821e837b01cc8c13bbf464216d5a5597027bb492834ea57edf Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.488857 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.490132 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.490160 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.490169 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.490192 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 10:26:21 crc kubenswrapper[4728]: E0227 10:26:21.490554 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc" Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.656994 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 27 10:26:21 crc kubenswrapper[4728]: W0227 10:26:21.713581 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 27 10:26:21 crc kubenswrapper[4728]: E0227 10:26:21.713734 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.728979 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e2c3359761c3ed821e837b01cc8c13bbf464216d5a5597027bb492834ea57edf"} Feb 27 10:26:21 crc 
kubenswrapper[4728]: I0227 10:26:21.730185 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9eaf7506dfb017276e2319799638fdabc5f37c65696050063a526fbb6f89ba92"} Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.731668 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"61dc6d0bc6091e6ac75ebe929b83ad8fb1bf4a00fb3df47a5ff24cb01bee303f"} Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.732741 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2b8c10e682015f68ef46e41f7cd3963e83e4c1186ce19fe64b02f89478816156"} Feb 27 10:26:21 crc kubenswrapper[4728]: I0227 10:26:21.733529 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a5a6e4ca1556d1ae9e3bfcd86c75fbed39c26a593f3d99b7d2ade0a5b7db57db"} Feb 27 10:26:21 crc kubenswrapper[4728]: W0227 10:26:21.939866 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 27 10:26:21 crc kubenswrapper[4728]: E0227 10:26:21.939949 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:26:22 crc 
kubenswrapper[4728]: E0227 10:26:22.066610 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="1.6s" Feb 27 10:26:22 crc kubenswrapper[4728]: W0227 10:26:22.080603 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 27 10:26:22 crc kubenswrapper[4728]: E0227 10:26:22.080719 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:26:22 crc kubenswrapper[4728]: W0227 10:26:22.086671 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 27 10:26:22 crc kubenswrapper[4728]: E0227 10:26:22.086752 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.291751 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:22 crc kubenswrapper[4728]: 
I0227 10:26:22.293171 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.293214 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.293226 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.293253 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 10:26:22 crc kubenswrapper[4728]: E0227 10:26:22.293770 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.656207 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.709640 4728 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 10:26:22 crc kubenswrapper[4728]: E0227 10:26:22.710547 4728 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.737393 4728 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" 
containerID="260dc784cf125e969fa94b843db3a3a67e3c26425a7bd3a4715e1d1a65223dc8" exitCode=0 Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.737478 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"260dc784cf125e969fa94b843db3a3a67e3c26425a7bd3a4715e1d1a65223dc8"} Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.737633 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.739342 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.739370 4728 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="6f4065dfe803d293615bc7131d6949ba9fbd78c633b7d762e3b10370e0d91406" exitCode=0 Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.739401 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.739428 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.739432 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"6f4065dfe803d293615bc7131d6949ba9fbd78c633b7d762e3b10370e0d91406"} Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.739438 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.740376 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.740440 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.740457 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.744145 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6509749e807d3a87bf519c4558e8554d94dea68a44f9f67f49f06231429278cf"} Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.744169 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.744177 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"68ff56cc50dacd18bee3d3bd29036c98d1d75121b11e2fe9a2d0ad8fc733ca8a"} Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.744193 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dc9a3578429a9c159c1460b66efc10d6bb8088cffd975fea95f2caedec653149"} Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.744205 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2a5ab953af522332825eec5ebd360d933ca20bf07db8f0bc94fcb5702fdce3d6"} Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.744992 4728 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.745007 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.745018 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.746005 4728 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75" exitCode=0 Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.746095 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.746100 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75"} Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.748345 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.748403 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.748427 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.751243 4728 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2" exitCode=0 Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.751296 4728 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2"} Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.751322 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.752210 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.752235 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.752244 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.755703 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.756989 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.757018 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:22 crc kubenswrapper[4728]: I0227 10:26:22.757026 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.656923 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 27 10:26:23 crc kubenswrapper[4728]: W0227 10:26:23.665090 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 27 10:26:23 crc kubenswrapper[4728]: E0227 10:26:23.665193 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:26:23 crc kubenswrapper[4728]: E0227 10:26:23.670105 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="3.2s" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.759204 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a"} Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.759287 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43"} Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.759311 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889"} Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.759331 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e"} Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.761604 4728 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e" exitCode=0 Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.761728 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e"} Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.761787 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.763393 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.763475 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.763493 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.764366 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"898b69784e6781fc21037c323b2802404ffaf0a7584f0623b644ffb3a605aaa1"} Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.764437 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.765717 4728 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.765786 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.765810 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.768397 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5590dd2a64c5ce1cfcffcf7f25149f7664dd72d2f811e4062447c35c066644ae"} Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.768458 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.768477 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8a784f19afff543a28851473da4380cc37388d631ee168f5d6cc5969b97a3f02"} Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.768540 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2fe83994e55479337722acb63999d86f58d298b82b4ab6eab9b1bb66c9471ee0"} Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.768608 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.769624 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.769678 4728 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.769701 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.769776 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.769850 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.769867 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.894906 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.896100 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.896145 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.896159 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:23 crc kubenswrapper[4728]: I0227 10:26:23.896196 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 10:26:23 crc kubenswrapper[4728]: E0227 10:26:23.896854 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc" Feb 27 10:26:23 crc kubenswrapper[4728]: E0227 10:26:23.988752 4728 event.go:368] "Unable to write event (may retry after 
sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189813943620a7b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.65448748 +0000 UTC m=+0.616853616,LastTimestamp:2026-02-27 10:26:20.65448748 +0000 UTC m=+0.616853616,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:26:24 crc kubenswrapper[4728]: W0227 10:26:24.447729 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Feb 27 10:26:24 crc kubenswrapper[4728]: E0227 10:26:24.447849 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.774642 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"badb92551d119ebeda1345ef3d53e16a350f1a832720069dcd30192516d09e7e"} Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.774737 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:24 crc 
kubenswrapper[4728]: I0227 10:26:24.776107 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.776139 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.776150 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.777275 4728 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0" exitCode=0 Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.777375 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0"} Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.777407 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.777417 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.777977 4728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.778020 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.778497 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.778569 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.778587 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.778758 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.778787 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.778798 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.778876 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.778919 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:24 crc kubenswrapper[4728]: I0227 10:26:24.778947 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:25 crc kubenswrapper[4728]: I0227 10:26:25.187037 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:26:25 crc kubenswrapper[4728]: I0227 10:26:25.783539 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:25 crc kubenswrapper[4728]: I0227 10:26:25.783996 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f"} Feb 27 10:26:25 crc kubenswrapper[4728]: I0227 10:26:25.784028 4728 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:26:25 crc kubenswrapper[4728]: I0227 10:26:25.784040 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2"} Feb 27 10:26:25 crc kubenswrapper[4728]: I0227 10:26:25.784049 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933"} Feb 27 10:26:25 crc kubenswrapper[4728]: I0227 10:26:25.784060 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e"} Feb 27 10:26:25 crc kubenswrapper[4728]: I0227 10:26:25.784565 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:25 crc kubenswrapper[4728]: I0227 10:26:25.784586 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:25 crc kubenswrapper[4728]: I0227 10:26:25.784595 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:26 crc kubenswrapper[4728]: I0227 10:26:26.140913 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:26:26 crc kubenswrapper[4728]: I0227 10:26:26.715254 4728 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 10:26:26 crc kubenswrapper[4728]: I0227 10:26:26.792474 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869"} Feb 27 10:26:26 crc kubenswrapper[4728]: I0227 10:26:26.792542 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:26 crc kubenswrapper[4728]: I0227 10:26:26.792642 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:26 crc kubenswrapper[4728]: I0227 10:26:26.794302 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:26 crc kubenswrapper[4728]: I0227 10:26:26.794344 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:26 crc kubenswrapper[4728]: I0227 10:26:26.794355 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:26 crc kubenswrapper[4728]: I0227 10:26:26.794369 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:26 crc kubenswrapper[4728]: I0227 10:26:26.794411 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:26 crc kubenswrapper[4728]: I0227 10:26:26.794429 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.097750 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.099202 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.099254 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.099266 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.099293 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.460163 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.460366 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.461971 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.462024 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.462040 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.470490 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.596097 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.637889 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.795067 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.795152 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.795788 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.796581 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.796654 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.796668 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.796797 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.796865 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.796884 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.798235 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.798302 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.798324 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.985639 4728 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.985900 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.987628 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.987673 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:27 crc kubenswrapper[4728]: I0227 10:26:27.987688 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:28 crc kubenswrapper[4728]: I0227 10:26:28.797829 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:28 crc kubenswrapper[4728]: I0227 10:26:28.798822 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:28 crc kubenswrapper[4728]: I0227 10:26:28.798845 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:28 crc kubenswrapper[4728]: I0227 10:26:28.798856 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:29 crc kubenswrapper[4728]: I0227 10:26:29.311074 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:26:29 crc kubenswrapper[4728]: I0227 10:26:29.800120 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:29 crc kubenswrapper[4728]: I0227 10:26:29.800957 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 
10:26:29 crc kubenswrapper[4728]: I0227 10:26:29.801013 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:29 crc kubenswrapper[4728]: I0227 10:26:29.801036 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:29 crc kubenswrapper[4728]: I0227 10:26:29.841904 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 27 10:26:29 crc kubenswrapper[4728]: I0227 10:26:29.842164 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:29 crc kubenswrapper[4728]: I0227 10:26:29.843783 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:29 crc kubenswrapper[4728]: I0227 10:26:29.843831 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:29 crc kubenswrapper[4728]: I0227 10:26:29.843848 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:30 crc kubenswrapper[4728]: I0227 10:26:30.597085 4728 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 10:26:30 crc kubenswrapper[4728]: I0227 10:26:30.597193 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 
27 10:26:30 crc kubenswrapper[4728]: E0227 10:26:30.798133 4728 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 10:26:31 crc kubenswrapper[4728]: I0227 10:26:31.015007 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 27 10:26:31 crc kubenswrapper[4728]: I0227 10:26:31.015281 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:31 crc kubenswrapper[4728]: I0227 10:26:31.017160 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:31 crc kubenswrapper[4728]: I0227 10:26:31.017227 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:31 crc kubenswrapper[4728]: I0227 10:26:31.017254 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:34 crc kubenswrapper[4728]: I0227 10:26:34.657174 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 27 10:26:34 crc kubenswrapper[4728]: I0227 10:26:34.817308 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 27 10:26:34 crc kubenswrapper[4728]: I0227 10:26:34.820831 4728 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="badb92551d119ebeda1345ef3d53e16a350f1a832720069dcd30192516d09e7e" exitCode=255 Feb 27 10:26:34 crc kubenswrapper[4728]: I0227 10:26:34.820938 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"badb92551d119ebeda1345ef3d53e16a350f1a832720069dcd30192516d09e7e"} Feb 27 10:26:34 crc kubenswrapper[4728]: I0227 10:26:34.821219 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:34 crc kubenswrapper[4728]: I0227 10:26:34.822353 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:34 crc kubenswrapper[4728]: I0227 10:26:34.822392 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:34 crc kubenswrapper[4728]: I0227 10:26:34.822410 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:34 crc kubenswrapper[4728]: I0227 10:26:34.823145 4728 scope.go:117] "RemoveContainer" containerID="badb92551d119ebeda1345ef3d53e16a350f1a832720069dcd30192516d09e7e" Feb 27 10:26:34 crc kubenswrapper[4728]: W0227 10:26:34.871636 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 27 10:26:34 crc kubenswrapper[4728]: I0227 10:26:34.871765 4728 trace.go:236] Trace[2100812930]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Feb-2026 10:26:24.870) (total time: 10001ms): Feb 27 10:26:34 crc kubenswrapper[4728]: Trace[2100812930]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:26:34.871) Feb 27 10:26:34 crc kubenswrapper[4728]: Trace[2100812930]: [10.001276695s] [10.001276695s] END Feb 27 10:26:34 crc kubenswrapper[4728]: E0227 10:26:34.871798 4728 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 27 10:26:35 crc kubenswrapper[4728]: W0227 10:26:35.000978 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 27 10:26:35 crc kubenswrapper[4728]: I0227 10:26:35.001100 4728 trace.go:236] Trace[48179549]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Feb-2026 10:26:24.999) (total time: 10001ms): Feb 27 10:26:35 crc kubenswrapper[4728]: Trace[48179549]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:26:35.000) Feb 27 10:26:35 crc kubenswrapper[4728]: Trace[48179549]: [10.001503751s] [10.001503751s] END Feb 27 10:26:35 crc kubenswrapper[4728]: E0227 10:26:35.001129 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 27 10:26:35 crc kubenswrapper[4728]: I0227 10:26:35.187535 4728 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 10:26:35 crc kubenswrapper[4728]: I0227 10:26:35.187605 4728 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 10:26:35 crc kubenswrapper[4728]: I0227 10:26:35.828059 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 27 10:26:35 crc kubenswrapper[4728]: I0227 10:26:35.830244 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fbe857530fff0ce3ed0a57d4b893233f1a9804bb286d42704a668098b8fc0f57"} Feb 27 10:26:35 crc kubenswrapper[4728]: I0227 10:26:35.830432 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:35 crc kubenswrapper[4728]: I0227 10:26:35.831608 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:35 crc kubenswrapper[4728]: I0227 10:26:35.831653 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:35 crc kubenswrapper[4728]: I0227 10:26:35.831667 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:35 crc kubenswrapper[4728]: W0227 10:26:35.919336 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:35Z is after 2026-02-23T05:33:13Z Feb 27 10:26:35 crc kubenswrapper[4728]: 
E0227 10:26:35.919479 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:35Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 10:26:35 crc kubenswrapper[4728]: W0227 10:26:35.922625 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:35Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:35 crc kubenswrapper[4728]: E0227 10:26:35.922746 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:35Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 10:26:35 crc kubenswrapper[4728]: E0227 10:26:35.924480 4728 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:35Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 10:26:35 crc kubenswrapper[4728]: I0227 10:26:35.927426 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:35Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:35 crc kubenswrapper[4728]: E0227 10:26:35.929470 4728 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:35Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189813943620a7b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.65448748 +0000 UTC m=+0.616853616,LastTimestamp:2026-02-27 10:26:20.65448748 +0000 UTC m=+0.616853616,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:26:35 crc kubenswrapper[4728]: E0227 10:26:35.930031 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:35Z is after 2026-02-23T05:33:13Z" interval="6.4s"
Feb 27 10:26:35 crc kubenswrapper[4728]: I0227 10:26:35.931003 4728 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 27 10:26:35 crc kubenswrapper[4728]: I0227 10:26:35.931072 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 27 10:26:35 crc kubenswrapper[4728]: E0227 10:26:35.935444 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:35Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 27 10:26:36 crc kubenswrapper[4728]: I0227 10:26:36.661064 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:36Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:36 crc kubenswrapper[4728]: I0227 10:26:36.835964 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 27 10:26:36 crc kubenswrapper[4728]: I0227 10:26:36.836840 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 27 10:26:36 crc kubenswrapper[4728]: I0227 10:26:36.839232 4728 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fbe857530fff0ce3ed0a57d4b893233f1a9804bb286d42704a668098b8fc0f57" exitCode=255
Feb 27 10:26:36 crc kubenswrapper[4728]: I0227 10:26:36.839278 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fbe857530fff0ce3ed0a57d4b893233f1a9804bb286d42704a668098b8fc0f57"}
Feb 27 10:26:36 crc kubenswrapper[4728]: I0227 10:26:36.839388 4728 scope.go:117] "RemoveContainer" containerID="badb92551d119ebeda1345ef3d53e16a350f1a832720069dcd30192516d09e7e"
Feb 27 10:26:36 crc kubenswrapper[4728]: I0227 10:26:36.839639 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:26:36 crc kubenswrapper[4728]: I0227 10:26:36.840885 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:26:36 crc kubenswrapper[4728]: I0227 10:26:36.840927 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:26:36 crc kubenswrapper[4728]: I0227 10:26:36.840938 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:26:36 crc kubenswrapper[4728]: I0227 10:26:36.841638 4728 scope.go:117] "RemoveContainer" containerID="fbe857530fff0ce3ed0a57d4b893233f1a9804bb286d42704a668098b8fc0f57"
Feb 27 10:26:36 crc kubenswrapper[4728]: E0227 10:26:36.841851 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 10:26:37 crc kubenswrapper[4728]: I0227 10:26:37.645238 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 10:26:37 crc kubenswrapper[4728]: I0227 10:26:37.645415 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:26:37 crc kubenswrapper[4728]: I0227 10:26:37.646885 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:26:37 crc kubenswrapper[4728]: I0227 10:26:37.646932 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:26:37 crc kubenswrapper[4728]: I0227 10:26:37.646950 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:26:37 crc kubenswrapper[4728]: I0227 10:26:37.660866 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:37Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:37 crc kubenswrapper[4728]: I0227 10:26:37.844127 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 27 10:26:38 crc kubenswrapper[4728]: W0227 10:26:38.275146 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:38Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:38 crc kubenswrapper[4728]: E0227 10:26:38.275251 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:38Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 10:26:38 crc kubenswrapper[4728]: I0227 10:26:38.659791 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:38Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:39 crc kubenswrapper[4728]: I0227 10:26:39.256043 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 10:26:39 crc kubenswrapper[4728]: I0227 10:26:39.256294 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:26:39 crc kubenswrapper[4728]: I0227 10:26:39.257614 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:26:39 crc kubenswrapper[4728]: I0227 10:26:39.257677 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:26:39 crc kubenswrapper[4728]: I0227 10:26:39.257695 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:26:39 crc kubenswrapper[4728]: I0227 10:26:39.258384 4728 scope.go:117] "RemoveContainer" containerID="fbe857530fff0ce3ed0a57d4b893233f1a9804bb286d42704a668098b8fc0f57"
Feb 27 10:26:39 crc kubenswrapper[4728]: E0227 10:26:39.258740 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 10:26:39 crc kubenswrapper[4728]: I0227 10:26:39.661647 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:39Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:39 crc kubenswrapper[4728]: I0227 10:26:39.878188 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 27 10:26:39 crc kubenswrapper[4728]: I0227 10:26:39.878439 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:26:39 crc kubenswrapper[4728]: I0227 10:26:39.879949 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:26:39 crc kubenswrapper[4728]: I0227 10:26:39.880018 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:26:39 crc kubenswrapper[4728]: I0227 10:26:39.880036 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:26:39 crc kubenswrapper[4728]: I0227 10:26:39.899448 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.197234 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.197480 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.202112 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.202164 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.202177 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.203707 4728 scope.go:117] "RemoveContainer" containerID="fbe857530fff0ce3ed0a57d4b893233f1a9804bb286d42704a668098b8fc0f57"
Feb 27 10:26:40 crc kubenswrapper[4728]: E0227 10:26:40.203889 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.207046 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.596817 4728 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.596873 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.660554 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:40Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:40 crc kubenswrapper[4728]: E0227 10:26:40.799187 4728 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.854618 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.854685 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.855886 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.855928 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.855895 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.856043 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.856064 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.855946 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:26:40 crc kubenswrapper[4728]: I0227 10:26:40.857119 4728 scope.go:117] "RemoveContainer" containerID="fbe857530fff0ce3ed0a57d4b893233f1a9804bb286d42704a668098b8fc0f57"
Feb 27 10:26:40 crc kubenswrapper[4728]: E0227 10:26:40.857381 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 10:26:40 crc kubenswrapper[4728]: W0227 10:26:40.908262 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:40Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:40 crc kubenswrapper[4728]: E0227 10:26:40.908366 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:40Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 10:26:41 crc kubenswrapper[4728]: I0227 10:26:41.660689 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:41Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:42 crc kubenswrapper[4728]: I0227 10:26:42.066672 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 10:26:42 crc kubenswrapper[4728]: I0227 10:26:42.066943 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:26:42 crc kubenswrapper[4728]: I0227 10:26:42.068408 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:26:42 crc kubenswrapper[4728]: I0227 10:26:42.068456 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:26:42 crc kubenswrapper[4728]: I0227 10:26:42.068471 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:26:42 crc kubenswrapper[4728]: I0227 10:26:42.069027 4728 scope.go:117] "RemoveContainer" containerID="fbe857530fff0ce3ed0a57d4b893233f1a9804bb286d42704a668098b8fc0f57"
Feb 27 10:26:42 crc kubenswrapper[4728]: E0227 10:26:42.069258 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 10:26:42 crc kubenswrapper[4728]: E0227 10:26:42.335324 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:42Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 27 10:26:42 crc kubenswrapper[4728]: I0227 10:26:42.336425 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:26:42 crc kubenswrapper[4728]: I0227 10:26:42.337518 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:26:42 crc kubenswrapper[4728]: I0227 10:26:42.337545 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:26:42 crc kubenswrapper[4728]: I0227 10:26:42.337553 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:26:42 crc kubenswrapper[4728]: I0227 10:26:42.337572 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 10:26:42 crc kubenswrapper[4728]: E0227 10:26:42.341126 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:42Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 27 10:26:42 crc kubenswrapper[4728]: W0227 10:26:42.530647 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:42Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:42 crc kubenswrapper[4728]: E0227 10:26:42.530757 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:42Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 10:26:42 crc kubenswrapper[4728]: I0227 10:26:42.660428 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:42Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:43 crc kubenswrapper[4728]: I0227 10:26:43.661001 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:43Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:43 crc kubenswrapper[4728]: I0227 10:26:43.967260 4728 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 27 10:26:43 crc kubenswrapper[4728]: W0227 10:26:43.967335 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:43Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:43 crc kubenswrapper[4728]: E0227 10:26:43.967429 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:43Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 10:26:43 crc kubenswrapper[4728]: E0227 10:26:43.975218 4728 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:43Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 10:26:44 crc kubenswrapper[4728]: I0227 10:26:44.661148 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:44Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:45 crc kubenswrapper[4728]: I0227 10:26:45.660265 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:45Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:45 crc kubenswrapper[4728]: E0227 10:26:45.935320 4728 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:45Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189813943620a7b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.65448748 +0000 UTC m=+0.616853616,LastTimestamp:2026-02-27 10:26:20.65448748 +0000 UTC m=+0.616853616,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:26:46 crc kubenswrapper[4728]: I0227 10:26:46.660721 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:46Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:47 crc kubenswrapper[4728]: I0227 10:26:47.663888 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:47Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:48 crc kubenswrapper[4728]: W0227 10:26:48.207560 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:48Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:48 crc kubenswrapper[4728]: E0227 10:26:48.207672 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:48Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 10:26:48 crc kubenswrapper[4728]: W0227 10:26:48.453311 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:48Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:48 crc kubenswrapper[4728]: E0227 10:26:48.453437 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:48Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 10:26:48 crc kubenswrapper[4728]: I0227 10:26:48.660479 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:48Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:49 crc kubenswrapper[4728]: E0227 10:26:49.339811 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:49Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 27 10:26:49 crc kubenswrapper[4728]: I0227 10:26:49.342098 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:26:49 crc kubenswrapper[4728]: I0227 10:26:49.343640 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:26:49 crc kubenswrapper[4728]: I0227 10:26:49.343877 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:26:49 crc kubenswrapper[4728]: I0227 10:26:49.344034 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:26:49 crc kubenswrapper[4728]: I0227 10:26:49.344198 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 10:26:49 crc kubenswrapper[4728]: E0227 10:26:49.347559 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:49Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 27 10:26:49 crc kubenswrapper[4728]: I0227 10:26:49.662291 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:49Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:50 crc kubenswrapper[4728]: I0227 10:26:50.597033 4728 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 27 10:26:50 crc kubenswrapper[4728]: I0227 10:26:50.597111 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 27 10:26:50 crc kubenswrapper[4728]: I0227 10:26:50.597180 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 10:26:50 crc kubenswrapper[4728]: I0227 10:26:50.597349 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:26:50 crc kubenswrapper[4728]: I0227 10:26:50.598633 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:26:50 crc kubenswrapper[4728]: I0227 10:26:50.598666 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:26:50 crc kubenswrapper[4728]: I0227 10:26:50.598677 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:26:50 crc kubenswrapper[4728]: I0227 10:26:50.599856 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"dc9a3578429a9c159c1460b66efc10d6bb8088cffd975fea95f2caedec653149"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Feb 27 10:26:50 crc kubenswrapper[4728]: I0227 10:26:50.600147 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://dc9a3578429a9c159c1460b66efc10d6bb8088cffd975fea95f2caedec653149" gracePeriod=30
Feb 27 10:26:50 crc kubenswrapper[4728]: I0227 10:26:50.660864 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:50Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:50 crc kubenswrapper[4728]: E0227 10:26:50.800307 4728 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 10:26:50 crc kubenswrapper[4728]: I0227 10:26:50.882002 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Feb 27 10:26:50 crc kubenswrapper[4728]: I0227 10:26:50.882400 4728 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="dc9a3578429a9c159c1460b66efc10d6bb8088cffd975fea95f2caedec653149" exitCode=255
Feb 27 10:26:50 crc kubenswrapper[4728]: I0227 10:26:50.882451 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"dc9a3578429a9c159c1460b66efc10d6bb8088cffd975fea95f2caedec653149"}
Feb 27 10:26:51 crc kubenswrapper[4728]: I0227 10:26:51.660726 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:51Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:51 crc kubenswrapper[4728]: I0227 10:26:51.888481 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Feb 27 10:26:51 crc kubenswrapper[4728]: I0227 10:26:51.889011 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b0b7f984c0efb3353dbb8919455c04f3ff92326789097be5efc6e0b5d2a52125"}
Feb 27 10:26:51 crc kubenswrapper[4728]: I0227 10:26:51.889174 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:26:51 crc kubenswrapper[4728]: I0227 10:26:51.890714 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:26:51 crc kubenswrapper[4728]: I0227 10:26:51.890773 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:26:51 crc kubenswrapper[4728]: I0227 10:26:51.890790 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:26:52 crc kubenswrapper[4728]: I0227 10:26:52.660988 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:52Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:52 crc kubenswrapper[4728]: I0227 10:26:52.891609 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:26:52 crc kubenswrapper[4728]: I0227 10:26:52.892632 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:26:52 crc kubenswrapper[4728]: I0227 10:26:52.892663 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:26:52 crc kubenswrapper[4728]: I0227 10:26:52.892674 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:26:53 crc kubenswrapper[4728]: I0227 10:26:53.659980 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:53Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:54 crc kubenswrapper[4728]: I0227 10:26:54.658795 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:54Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:55 crc kubenswrapper[4728]: I0227 10:26:55.660117 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:55Z is after 2026-02-23T05:33:13Z
Feb 27 10:26:55 crc kubenswrapper[4728]: I0227 10:26:55.724109 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:26:55 crc kubenswrapper[4728]: I0227 10:26:55.728140 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:26:55 crc 
kubenswrapper[4728]: I0227 10:26:55.728183 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:55 crc kubenswrapper[4728]: I0227 10:26:55.728193 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:55 crc kubenswrapper[4728]: I0227 10:26:55.728831 4728 scope.go:117] "RemoveContainer" containerID="fbe857530fff0ce3ed0a57d4b893233f1a9804bb286d42704a668098b8fc0f57" Feb 27 10:26:55 crc kubenswrapper[4728]: E0227 10:26:55.941077 4728 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:55Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189813943620a7b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.65448748 +0000 UTC m=+0.616853616,LastTimestamp:2026-02-27 10:26:20.65448748 +0000 UTC m=+0.616853616,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:26:56 crc kubenswrapper[4728]: E0227 10:26:56.346198 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:56Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 27 10:26:56 crc kubenswrapper[4728]: I0227 10:26:56.348330 4728 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Feb 27 10:26:56 crc kubenswrapper[4728]: I0227 10:26:56.349921 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:56 crc kubenswrapper[4728]: I0227 10:26:56.349974 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:56 crc kubenswrapper[4728]: I0227 10:26:56.349993 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:56 crc kubenswrapper[4728]: I0227 10:26:56.350067 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 10:26:56 crc kubenswrapper[4728]: E0227 10:26:56.354876 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:56Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 10:26:56 crc kubenswrapper[4728]: I0227 10:26:56.661136 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:56Z is after 2026-02-23T05:33:13Z Feb 27 10:26:56 crc kubenswrapper[4728]: I0227 10:26:56.904862 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 27 10:26:56 crc kubenswrapper[4728]: I0227 10:26:56.905980 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 27 10:26:56 crc kubenswrapper[4728]: I0227 10:26:56.908961 
4728 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3042ae9f43adfddc493f967148f7581c03960800b63252b263a48c4cdd9e10e9" exitCode=255 Feb 27 10:26:56 crc kubenswrapper[4728]: I0227 10:26:56.909033 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3042ae9f43adfddc493f967148f7581c03960800b63252b263a48c4cdd9e10e9"} Feb 27 10:26:56 crc kubenswrapper[4728]: I0227 10:26:56.909100 4728 scope.go:117] "RemoveContainer" containerID="fbe857530fff0ce3ed0a57d4b893233f1a9804bb286d42704a668098b8fc0f57" Feb 27 10:26:56 crc kubenswrapper[4728]: I0227 10:26:56.909284 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:56 crc kubenswrapper[4728]: I0227 10:26:56.910657 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:56 crc kubenswrapper[4728]: I0227 10:26:56.910706 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:56 crc kubenswrapper[4728]: I0227 10:26:56.910730 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:56 crc kubenswrapper[4728]: I0227 10:26:56.911644 4728 scope.go:117] "RemoveContainer" containerID="3042ae9f43adfddc493f967148f7581c03960800b63252b263a48c4cdd9e10e9" Feb 27 10:26:56 crc kubenswrapper[4728]: E0227 10:26:56.911977 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 10:26:57 crc kubenswrapper[4728]: W0227 10:26:57.428183 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:57Z is after 2026-02-23T05:33:13Z Feb 27 10:26:57 crc kubenswrapper[4728]: E0227 10:26:57.428266 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:57Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 10:26:57 crc kubenswrapper[4728]: I0227 10:26:57.596498 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:26:57 crc kubenswrapper[4728]: I0227 10:26:57.596830 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:57 crc kubenswrapper[4728]: I0227 10:26:57.598621 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:57 crc kubenswrapper[4728]: I0227 10:26:57.598691 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:57 crc kubenswrapper[4728]: I0227 10:26:57.598707 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:57 crc kubenswrapper[4728]: I0227 10:26:57.661593 4728 csi_plugin.go:884] Failed to contact API server when waiting 
for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:57Z is after 2026-02-23T05:33:13Z Feb 27 10:26:57 crc kubenswrapper[4728]: I0227 10:26:57.914575 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 27 10:26:58 crc kubenswrapper[4728]: W0227 10:26:58.190985 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:58Z is after 2026-02-23T05:33:13Z Feb 27 10:26:58 crc kubenswrapper[4728]: E0227 10:26:58.191076 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:58Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 10:26:58 crc kubenswrapper[4728]: I0227 10:26:58.662134 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:58Z is after 2026-02-23T05:33:13Z Feb 27 10:26:59 crc kubenswrapper[4728]: I0227 10:26:59.256285 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:26:59 crc kubenswrapper[4728]: I0227 10:26:59.256476 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:59 crc kubenswrapper[4728]: I0227 10:26:59.257819 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:59 crc kubenswrapper[4728]: I0227 10:26:59.257881 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:59 crc kubenswrapper[4728]: I0227 10:26:59.257901 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:59 crc kubenswrapper[4728]: I0227 10:26:59.258681 4728 scope.go:117] "RemoveContainer" containerID="3042ae9f43adfddc493f967148f7581c03960800b63252b263a48c4cdd9e10e9" Feb 27 10:26:59 crc kubenswrapper[4728]: E0227 10:26:59.258978 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 10:26:59 crc kubenswrapper[4728]: I0227 10:26:59.311564 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:26:59 crc kubenswrapper[4728]: I0227 10:26:59.311780 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:26:59 crc kubenswrapper[4728]: I0227 10:26:59.313111 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:26:59 crc kubenswrapper[4728]: I0227 10:26:59.313157 4728 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:26:59 crc kubenswrapper[4728]: I0227 10:26:59.313173 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:26:59 crc kubenswrapper[4728]: I0227 10:26:59.661273 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:26:59Z is after 2026-02-23T05:33:13Z Feb 27 10:27:00 crc kubenswrapper[4728]: I0227 10:27:00.596557 4728 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 10:27:00 crc kubenswrapper[4728]: I0227 10:27:00.596621 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 10:27:00 crc kubenswrapper[4728]: I0227 10:27:00.662242 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:00Z is after 2026-02-23T05:33:13Z Feb 27 10:27:00 crc kubenswrapper[4728]: E0227 
10:27:00.800903 4728 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 10:27:00 crc kubenswrapper[4728]: I0227 10:27:00.809154 4728 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 10:27:00 crc kubenswrapper[4728]: E0227 10:27:00.815660 4728 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 10:27:00 crc kubenswrapper[4728]: E0227 10:27:00.817185 4728 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Feb 27 10:27:01 crc kubenswrapper[4728]: I0227 10:27:01.660582 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:01Z is after 2026-02-23T05:33:13Z Feb 27 10:27:02 crc kubenswrapper[4728]: I0227 10:27:02.066419 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:27:02 crc kubenswrapper[4728]: I0227 10:27:02.066712 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:27:02 crc kubenswrapper[4728]: I0227 10:27:02.068153 4728 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:02 crc kubenswrapper[4728]: I0227 10:27:02.068202 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:02 crc kubenswrapper[4728]: I0227 10:27:02.068220 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:02 crc kubenswrapper[4728]: I0227 10:27:02.068836 4728 scope.go:117] "RemoveContainer" containerID="3042ae9f43adfddc493f967148f7581c03960800b63252b263a48c4cdd9e10e9" Feb 27 10:27:02 crc kubenswrapper[4728]: E0227 10:27:02.069059 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 10:27:02 crc kubenswrapper[4728]: I0227 10:27:02.660751 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:02Z is after 2026-02-23T05:33:13Z Feb 27 10:27:03 crc kubenswrapper[4728]: E0227 10:27:03.351954 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:03Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 27 10:27:03 crc kubenswrapper[4728]: I0227 10:27:03.355003 4728 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Feb 27 10:27:03 crc kubenswrapper[4728]: I0227 10:27:03.356497 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:03 crc kubenswrapper[4728]: I0227 10:27:03.356539 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:03 crc kubenswrapper[4728]: I0227 10:27:03.356584 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:03 crc kubenswrapper[4728]: I0227 10:27:03.356609 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 10:27:03 crc kubenswrapper[4728]: E0227 10:27:03.362137 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:03Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 10:27:03 crc kubenswrapper[4728]: I0227 10:27:03.659789 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:03Z is after 2026-02-23T05:33:13Z Feb 27 10:27:04 crc kubenswrapper[4728]: I0227 10:27:04.660564 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:04Z is after 2026-02-23T05:33:13Z Feb 27 10:27:05 crc kubenswrapper[4728]: I0227 10:27:05.662221 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:05Z is after 2026-02-23T05:33:13Z Feb 27 10:27:05 crc kubenswrapper[4728]: E0227 10:27:05.947012 4728 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:05Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189813943620a7b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.65448748 +0000 UTC m=+0.616853616,LastTimestamp:2026-02-27 10:26:20.65448748 +0000 UTC m=+0.616853616,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:06 crc kubenswrapper[4728]: W0227 10:27:06.348967 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:06Z is after 2026-02-23T05:33:13Z Feb 27 10:27:06 crc kubenswrapper[4728]: E0227 10:27:06.349042 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-02-27T10:27:06Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 10:27:06 crc kubenswrapper[4728]: I0227 10:27:06.660484 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:06Z is after 2026-02-23T05:33:13Z Feb 27 10:27:07 crc kubenswrapper[4728]: I0227 10:27:07.661702 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:07Z is after 2026-02-23T05:33:13Z Feb 27 10:27:07 crc kubenswrapper[4728]: I0227 10:27:07.994900 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 10:27:07 crc kubenswrapper[4728]: I0227 10:27:07.995129 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:27:07 crc kubenswrapper[4728]: I0227 10:27:07.996895 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:07 crc kubenswrapper[4728]: I0227 10:27:07.996981 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:07 crc kubenswrapper[4728]: I0227 10:27:07.997000 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:08 crc kubenswrapper[4728]: I0227 10:27:08.660920 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:08Z is after 2026-02-23T05:33:13Z Feb 27 10:27:09 crc kubenswrapper[4728]: I0227 10:27:09.661065 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:09Z is after 2026-02-23T05:33:13Z Feb 27 10:27:10 crc kubenswrapper[4728]: W0227 10:27:10.053147 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:10Z is after 2026-02-23T05:33:13Z Feb 27 10:27:10 crc kubenswrapper[4728]: E0227 10:27:10.053245 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:10Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 10:27:10 crc kubenswrapper[4728]: E0227 10:27:10.356540 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:10Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 27 10:27:10 crc 
kubenswrapper[4728]: I0227 10:27:10.363030 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:27:10 crc kubenswrapper[4728]: I0227 10:27:10.364968 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:27:10 crc kubenswrapper[4728]: I0227 10:27:10.365045 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:27:10 crc kubenswrapper[4728]: I0227 10:27:10.365068 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:27:10 crc kubenswrapper[4728]: I0227 10:27:10.365108 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 10:27:10 crc kubenswrapper[4728]: E0227 10:27:10.368354 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:10Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 27 10:27:10 crc kubenswrapper[4728]: I0227 10:27:10.597778 4728 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 27 10:27:10 crc kubenswrapper[4728]: I0227 10:27:10.597952 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 27 10:27:10 crc kubenswrapper[4728]: I0227 10:27:10.658935 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:10Z is after 2026-02-23T05:33:13Z
Feb 27 10:27:10 crc kubenswrapper[4728]: E0227 10:27:10.801054 4728 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 10:27:11 crc kubenswrapper[4728]: I0227 10:27:11.661288 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:11Z is after 2026-02-23T05:33:13Z
Feb 27 10:27:12 crc kubenswrapper[4728]: I0227 10:27:12.660151 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:12Z is after 2026-02-23T05:33:13Z
Feb 27 10:27:13 crc kubenswrapper[4728]: I0227 10:27:13.661793 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:13Z is after 2026-02-23T05:33:13Z
Feb 27 10:27:14 crc kubenswrapper[4728]: I0227 10:27:14.663234 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:14Z is after 2026-02-23T05:33:13Z
Feb 27 10:27:15 crc kubenswrapper[4728]: I0227 10:27:15.661239 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:15Z is after 2026-02-23T05:33:13Z
Feb 27 10:27:15 crc kubenswrapper[4728]: I0227 10:27:15.724106 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:27:15 crc kubenswrapper[4728]: I0227 10:27:15.725406 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:27:15 crc kubenswrapper[4728]: I0227 10:27:15.725463 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:27:15 crc kubenswrapper[4728]: I0227 10:27:15.725481 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:27:15 crc kubenswrapper[4728]: I0227 10:27:15.726374 4728 scope.go:117] "RemoveContainer" containerID="3042ae9f43adfddc493f967148f7581c03960800b63252b263a48c4cdd9e10e9"
Feb 27 10:27:15 crc kubenswrapper[4728]: E0227 10:27:15.726678 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 10:27:15 crc kubenswrapper[4728]: E0227 10:27:15.954454 4728 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T10:27:15Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189813943620a7b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.65448748 +0000 UTC m=+0.616853616,LastTimestamp:2026-02-27 10:26:20.65448748 +0000 UTC m=+0.616853616,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:16 crc kubenswrapper[4728]: I0227 10:27:16.663605 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 10:27:17 crc kubenswrapper[4728]: E0227 10:27:17.364812 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 27 10:27:17 crc kubenswrapper[4728]: I0227 10:27:17.368919 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:27:17 crc kubenswrapper[4728]: I0227 10:27:17.370327 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:27:17 crc kubenswrapper[4728]: I0227 10:27:17.370385 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:27:17 crc kubenswrapper[4728]: I0227 10:27:17.370405 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:27:17 crc kubenswrapper[4728]: I0227 10:27:17.370444 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 10:27:17 crc kubenswrapper[4728]: E0227 10:27:17.377308 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 27 10:27:17 crc kubenswrapper[4728]: I0227 10:27:17.663493 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 10:27:18 crc kubenswrapper[4728]: I0227 10:27:18.662852 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 10:27:19 crc kubenswrapper[4728]: I0227 10:27:19.663332 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.597428 4728 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.597489 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.597581 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.597718 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.598929 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.598969 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.598978 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.599440 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"b0b7f984c0efb3353dbb8919455c04f3ff92326789097be5efc6e0b5d2a52125"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.599550 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://b0b7f984c0efb3353dbb8919455c04f3ff92326789097be5efc6e0b5d2a52125" gracePeriod=30
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.659795 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 10:27:20 crc kubenswrapper[4728]: E0227 10:27:20.801148 4728 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.986422 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log"
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.988235 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.989003 4728 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="b0b7f984c0efb3353dbb8919455c04f3ff92326789097be5efc6e0b5d2a52125" exitCode=255
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.989058 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b0b7f984c0efb3353dbb8919455c04f3ff92326789097be5efc6e0b5d2a52125"}
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.989090 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2ad85d84f28b4446861f8f7b5960756e284d3c2ece1aaf2a573f15e8dc955611"}
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.989108 4728 scope.go:117] "RemoveContainer" containerID="dc9a3578429a9c159c1460b66efc10d6bb8088cffd975fea95f2caedec653149"
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.989231 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.991102 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.991151 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:27:20 crc kubenswrapper[4728]: I0227 10:27:20.991170 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:27:21 crc kubenswrapper[4728]: I0227 10:27:21.662982 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 10:27:21 crc kubenswrapper[4728]: I0227 10:27:21.994475 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log"
Feb 27 10:27:21 crc kubenswrapper[4728]: I0227 10:27:21.996413 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:27:21 crc kubenswrapper[4728]: I0227 10:27:21.997348 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:27:21 crc kubenswrapper[4728]: I0227 10:27:21.997430 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:27:21 crc kubenswrapper[4728]: I0227 10:27:21.997454 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:27:22 crc kubenswrapper[4728]: I0227 10:27:22.662792 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 10:27:23 crc kubenswrapper[4728]: I0227 10:27:23.660793 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 10:27:24 crc kubenswrapper[4728]: E0227 10:27:24.373586 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 27 10:27:24 crc kubenswrapper[4728]: I0227 10:27:24.377568 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 10:27:24 crc kubenswrapper[4728]: I0227 10:27:24.379191 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:27:24 crc kubenswrapper[4728]: I0227 10:27:24.379257 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:27:24 crc kubenswrapper[4728]: I0227 10:27:24.379282 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:27:24 crc kubenswrapper[4728]: I0227 10:27:24.379329 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 10:27:24 crc kubenswrapper[4728]: E0227 10:27:24.386649 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 27 10:27:24 crc kubenswrapper[4728]: I0227 10:27:24.664270 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 10:27:25 crc kubenswrapper[4728]: I0227 10:27:25.662829 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 10:27:25 crc kubenswrapper[4728]: E0227 10:27:25.960372 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813943620a7b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.65448748 +0000 UTC m=+0.616853616,LastTimestamp:2026-02-27 10:26:20.65448748 +0000 UTC m=+0.616853616,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:25 crc kubenswrapper[4728]: E0227 10:27:25.966094 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813943911a971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.703836529 +0000 UTC m=+0.666202665,LastTimestamp:2026-02-27 10:26:20.703836529 +0000 UTC m=+0.666202665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:25 crc kubenswrapper[4728]: E0227 10:27:25.969670 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981394391207ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.703860719 +0000 UTC m=+0.666226865,LastTimestamp:2026-02-27 10:26:20.703860719 +0000 UTC m=+0.666226865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:25 crc kubenswrapper[4728]: E0227 10:27:25.974165 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981394391246b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.70387679 +0000 UTC m=+0.666242926,LastTimestamp:2026-02-27 10:26:20.70387679 +0000 UTC m=+0.666242926,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:25 crc kubenswrapper[4728]: E0227 10:27:25.978341 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813943df16a16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.785609238 +0000 UTC m=+0.747975344,LastTimestamp:2026-02-27 10:26:20.785609238 +0000 UTC m=+0.747975344,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:25 crc kubenswrapper[4728]: E0227 10:27:25.987568 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813943911a971\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813943911a971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.703836529 +0000 UTC m=+0.666202665,LastTimestamp:2026-02-27 10:26:20.827261205 +0000 UTC m=+0.789627311,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:25 crc kubenswrapper[4728]: E0227 10:27:25.991905 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981394391207ef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981394391207ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.703860719 +0000 UTC m=+0.666226865,LastTimestamp:2026-02-27 10:26:20.827285895 +0000 UTC m=+0.789652001,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.002477 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981394391246b6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981394391246b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.70387679 +0000 UTC m=+0.666242926,LastTimestamp:2026-02-27 10:26:20.827297186 +0000 UTC m=+0.789663302,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.008213 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813943911a971\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813943911a971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.703836529 +0000 UTC m=+0.666202665,LastTimestamp:2026-02-27 10:26:20.828499149 +0000 UTC m=+0.790865295,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.012377 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981394391207ef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981394391207ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.703860719 +0000 UTC m=+0.666226865,LastTimestamp:2026-02-27 10:26:20.828571251 +0000 UTC m=+0.790937397,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.016136 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981394391246b6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981394391246b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.70387679 +0000 UTC m=+0.666242926,LastTimestamp:2026-02-27 10:26:20.828587771 +0000 UTC m=+0.790953907,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.022904 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813943911a971\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813943911a971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.703836529 +0000 UTC m=+0.666202665,LastTimestamp:2026-02-27 10:26:20.832626792 +0000 UTC m=+0.794992938,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.028036 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981394391207ef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981394391207ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.703860719 +0000 UTC m=+0.666226865,LastTimestamp:2026-02-27 10:26:20.832682984 +0000 UTC m=+0.795049130,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.032912 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981394391246b6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981394391246b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.70387679 +0000 UTC m=+0.666242926,LastTimestamp:2026-02-27 10:26:20.832707495 +0000 UTC m=+0.795073641,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.037052 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813943911a971\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813943911a971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.703836529 +0000 UTC m=+0.666202665,LastTimestamp:2026-02-27 10:26:20.833253289 +0000 UTC m=+0.795619435,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.041565 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813943911a971\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813943911a971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.703836529 +0000 UTC m=+0.666202665,LastTimestamp:2026-02-27 10:26:20.83326384 +0000 UTC m=+0.795629956,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.045780 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981394391207ef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981394391207ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.703860719 +0000 UTC m=+0.666226865,LastTimestamp:2026-02-27 10:26:20.83328536 +0000 UTC m=+0.795651486,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.049220 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981394391207ef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981394391207ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.703860719 +0000 UTC m=+0.666226865,LastTimestamp:2026-02-27 10:26:20.83329571 +0000 UTC m=+0.795661856,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.053120 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981394391246b6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981394391246b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.70387679 +0000 UTC m=+0.666242926,LastTimestamp:2026-02-27 10:26:20.833309611 +0000 UTC m=+0.795675727,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.057010 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981394391246b6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981394391246b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.70387679 +0000 UTC m=+0.666242926,LastTimestamp:2026-02-27 10:26:20.833319761 +0000 UTC m=+0.795685907,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.060991 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813943911a971\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813943911a971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.703836529 +0000 UTC m=+0.666202665,LastTimestamp:2026-02-27 10:26:20.834424272 +0000 UTC m=+0.796790378,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.065018 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981394391207ef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981394391207ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.703860719 +0000 UTC m=+0.666226865,LastTimestamp:2026-02-27 10:26:20.834436632 +0000 UTC m=+0.796802738,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.069670 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981394391246b6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981394391246b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.70387679 +0000 UTC m=+0.666242926,LastTimestamp:2026-02-27 10:26:20.834446932 +0000 UTC m=+0.796813038,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.074570 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189813943911a971\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189813943911a971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.703836529 +0000 UTC m=+0.666202665,LastTimestamp:2026-02-27 10:26:20.835204744 +0000 UTC m=+0.797570850,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.078183 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18981394391207ef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18981394391207ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:20.703860719 +0000 UTC m=+0.666226865,LastTimestamp:2026-02-27 10:26:20.835217854 +0000 UTC m=+0.797583960,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.086188 4728
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18981394581f6272 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:21.224829554 +0000 UTC m=+1.187195660,LastTimestamp:2026-02-27 10:26:21.224829554 +0000 UTC m=+1.187195660,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.089992 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981394581fcd2b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:21.224856875 +0000 UTC m=+1.187223021,LastTimestamp:2026-02-27 10:26:21.224856875 +0000 UTC m=+1.187223021,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.098155 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898139458bdb7fc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:21.23520614 +0000 UTC m=+1.197572256,LastTimestamp:2026-02-27 10:26:21.23520614 +0000 UTC m=+1.197572256,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.105418 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898139459f96acf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:21.255895759 +0000 UTC m=+1.218261895,LastTimestamp:2026-02-27 10:26:21.255895759 +0000 UTC m=+1.218261895,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.109615 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189813945b36a3ff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:21.276685311 +0000 UTC m=+1.239051407,LastTimestamp:2026-02-27 10:26:21.276685311 +0000 UTC m=+1.239051407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.112052 4728 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189813947c5db686 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:21.832894086 +0000 UTC m=+1.795260192,LastTimestamp:2026-02-27 10:26:21.832894086 +0000 UTC m=+1.795260192,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.114248 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189813947c732065 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:21.834297445 +0000 UTC m=+1.796663551,LastTimestamp:2026-02-27 10:26:21.834297445 +0000 UTC m=+1.796663551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.116249 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813947c77ab0b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:21.834595083 +0000 UTC m=+1.796961229,LastTimestamp:2026-02-27 10:26:21.834595083 +0000 UTC m=+1.796961229,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.118592 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189813947c7a6d96 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:21.834775958 +0000 UTC m=+1.797142074,LastTimestamp:2026-02-27 10:26:21.834775958 +0000 UTC m=+1.797142074,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.119927 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189813947c7a7f66 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:21.834780518 +0000 UTC m=+1.797146624,LastTimestamp:2026-02-27 10:26:21.834780518 +0000 UTC m=+1.797146624,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.123050 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189813947d3b070b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:21.847398155 +0000 UTC 
m=+1.809764261,LastTimestamp:2026-02-27 10:26:21.847398155 +0000 UTC m=+1.809764261,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.126603 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189813947d5768fd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:21.849258237 +0000 UTC m=+1.811624333,LastTimestamp:2026-02-27 10:26:21.849258237 +0000 UTC m=+1.811624333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.130355 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813947d6cc093 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:21.850656915 +0000 UTC m=+1.813023021,LastTimestamp:2026-02-27 10:26:21.850656915 +0000 UTC m=+1.813023021,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.133599 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189813947d7b62a7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:21.851615911 +0000 UTC m=+1.813982017,LastTimestamp:2026-02-27 10:26:21.851615911 +0000 UTC m=+1.813982017,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.137369 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189813947d8bced5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:21.852692181 +0000 UTC m=+1.815058287,LastTimestamp:2026-02-27 10:26:21.852692181 +0000 UTC m=+1.815058287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.141180 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189813947d8ea574 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:21.852878196 +0000 UTC m=+1.815244302,LastTimestamp:2026-02-27 10:26:21.852878196 +0000 UTC m=+1.815244302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.146056 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.189813948f7b23e3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:22.153589731 +0000 UTC m=+2.115955837,LastTimestamp:2026-02-27 10:26:22.153589731 +0000 UTC m=+2.115955837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.149423 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189813949035eb08 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:22.165830408 +0000 UTC m=+2.128196514,LastTimestamp:2026-02-27 10:26:22.165830408 +0000 UTC m=+2.128196514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.153187 4728 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189813949048b642 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:22.167062082 +0000 UTC m=+2.129428188,LastTimestamp:2026-02-27 10:26:22.167062082 +0000 UTC m=+2.129428188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.156910 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189813949bd2ec5f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:22.360669279 +0000 UTC 
m=+2.323035425,LastTimestamp:2026-02-27 10:26:22.360669279 +0000 UTC m=+2.323035425,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.160296 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189813949c9c7fdc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:22.373879772 +0000 UTC m=+2.336245918,LastTimestamp:2026-02-27 10:26:22.373879772 +0000 UTC m=+2.336245918,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.163912 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189813949cb454b2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:22.375441586 +0000 UTC m=+2.337807732,LastTimestamp:2026-02-27 10:26:22.375441586 +0000 UTC m=+2.337807732,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.167126 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981394a9f56038 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:22.597808184 +0000 UTC m=+2.560174290,LastTimestamp:2026-02-27 10:26:22.597808184 +0000 UTC m=+2.560174290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.170867 4728 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981394aa9d1654 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:22.608799316 +0000 UTC m=+2.571165432,LastTimestamp:2026-02-27 10:26:22.608799316 +0000 UTC m=+2.571165432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.175114 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18981394b27f6d6e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 
10:26:22.741073262 +0000 UTC m=+2.703439378,LastTimestamp:2026-02-27 10:26:22.741073262 +0000 UTC m=+2.703439378,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.178823 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981394b28feaac openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:22.7421539 +0000 UTC m=+2.704520036,LastTimestamp:2026-02-27 10:26:22.7421539 +0000 UTC m=+2.704520036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.182667 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981394b35320f9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:22.754947321 +0000 UTC m=+2.717313447,LastTimestamp:2026-02-27 10:26:22.754947321 +0000 UTC m=+2.717313447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.185962 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981394b39a4dd5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:22.759611861 +0000 UTC m=+2.721977987,LastTimestamp:2026-02-27 10:26:22.759611861 +0000 UTC m=+2.721977987,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.189733 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981394c070fdce openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:22.975008206 +0000 UTC m=+2.937374312,LastTimestamp:2026-02-27 10:26:22.975008206 +0000 UTC m=+2.937374312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.194049 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18981394c07d3d0a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:22.975810826 +0000 UTC m=+2.938176932,LastTimestamp:2026-02-27 10:26:22.975810826 +0000 UTC m=+2.938176932,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 
10:27:26.197488 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981394c18b1a3a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:22.993496634 +0000 UTC m=+2.955862740,LastTimestamp:2026-02-27 10:26:22.993496634 +0000 UTC m=+2.955862740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.201253 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18981394c1add8fd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:22.995773693 +0000 UTC m=+2.958139799,LastTimestamp:2026-02-27 10:26:22.995773693 +0000 UTC m=+2.958139799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.205046 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981394c1b19273 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:22.996017779 +0000 UTC m=+2.958383885,LastTimestamp:2026-02-27 10:26:22.996017779 +0000 UTC m=+2.958383885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.210550 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981394c1e10c02 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 
10:26:22.99912909 +0000 UTC m=+2.961495196,LastTimestamp:2026-02-27 10:26:22.99912909 +0000 UTC m=+2.961495196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.214992 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981394c1f1cb26 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.000226598 +0000 UTC m=+2.962592724,LastTimestamp:2026-02-27 10:26:23.000226598 +0000 UTC m=+2.962592724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.219859 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981394c33ec41a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container 
kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.022048282 +0000 UTC m=+2.984414388,LastTimestamp:2026-02-27 10:26:23.022048282 +0000 UTC m=+2.984414388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.225105 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981394c34fe948 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.023171912 +0000 UTC m=+2.985538018,LastTimestamp:2026-02-27 10:26:23.023171912 +0000 UTC m=+2.985538018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.228461 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981394c35abe53 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.023881811 +0000 UTC m=+2.986247917,LastTimestamp:2026-02-27 10:26:23.023881811 +0000 UTC m=+2.986247917,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.232073 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981394cd94ad74 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.19545074 +0000 UTC m=+3.157816856,LastTimestamp:2026-02-27 10:26:23.19545074 +0000 UTC m=+3.157816856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.235404 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981394ce18ed2c 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.204117804 +0000 UTC m=+3.166483920,LastTimestamp:2026-02-27 10:26:23.204117804 +0000 UTC m=+3.166483920,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.238835 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981394ce4f8817 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.207696407 +0000 UTC m=+3.170062513,LastTimestamp:2026-02-27 10:26:23.207696407 +0000 UTC m=+3.170062513,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.242660 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981394ce5d8083 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.208611971 +0000 UTC m=+3.170978097,LastTimestamp:2026-02-27 10:26:23.208611971 +0000 UTC m=+3.170978097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.248254 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981394cefcdc69 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.219055721 +0000 UTC m=+3.181421827,LastTimestamp:2026-02-27 10:26:23.219055721 +0000 UTC m=+3.181421827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 
10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.251790 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981394cf0d681b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.220140059 +0000 UTC m=+3.182506195,LastTimestamp:2026-02-27 10:26:23.220140059 +0000 UTC m=+3.182506195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.257296 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981394da24da19 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 
10:26:23.406225945 +0000 UTC m=+3.368592061,LastTimestamp:2026-02-27 10:26:23.406225945 +0000 UTC m=+3.368592061,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.261130 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981394da4b5b45 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.408749381 +0000 UTC m=+3.371115497,LastTimestamp:2026-02-27 10:26:23.408749381 +0000 UTC m=+3.371115497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.266731 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18981394db29dc28 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.423331368 +0000 UTC m=+3.385697484,LastTimestamp:2026-02-27 10:26:23.423331368 +0000 UTC m=+3.385697484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.271235 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981394db50005b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.425831003 +0000 UTC m=+3.388197109,LastTimestamp:2026-02-27 10:26:23.425831003 +0000 UTC m=+3.388197109,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.274942 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.18981394db5fb4da openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.42686025 +0000 UTC m=+3.389226376,LastTimestamp:2026-02-27 10:26:23.42686025 +0000 UTC m=+3.389226376,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.278963 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981394e846e6b6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.643338422 +0000 UTC m=+3.605704528,LastTimestamp:2026-02-27 10:26:23.643338422 +0000 UTC m=+3.605704528,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 
10:27:26.283420 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981394e923aeec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.657807596 +0000 UTC m=+3.620173742,LastTimestamp:2026-02-27 10:26:23.657807596 +0000 UTC m=+3.620173742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.286961 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981394e939d044 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.659257924 +0000 UTC m=+3.621624070,LastTimestamp:2026-02-27 
10:26:23.659257924 +0000 UTC m=+3.621624070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.290985 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981394ef879a19 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.765019161 +0000 UTC m=+3.727385307,LastTimestamp:2026-02-27 10:26:23.765019161 +0000 UTC m=+3.727385307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.295855 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981394f763cf89 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container 
kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.896891273 +0000 UTC m=+3.859257389,LastTimestamp:2026-02-27 10:26:23.896891273 +0000 UTC m=+3.859257389,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.300406 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981394f84b9b3c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.912082236 +0000 UTC m=+3.874448342,LastTimestamp:2026-02-27 10:26:23.912082236 +0000 UTC m=+3.874448342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.303430 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981394fc26895a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.97676169 +0000 UTC m=+3.939127826,LastTimestamp:2026-02-27 10:26:23.97676169 +0000 UTC m=+3.939127826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.308054 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981394fdddcbec openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:24.005549036 +0000 UTC m=+3.967915182,LastTimestamp:2026-02-27 10:26:24.005549036 +0000 UTC m=+3.967915182,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.311933 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813952c13b430 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:24.78083384 +0000 UTC m=+4.743199966,LastTimestamp:2026-02-27 10:26:24.78083384 +0000 UTC m=+4.743199966,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.315904 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898139539b6c942 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:25.00962541 +0000 UTC m=+4.971991556,LastTimestamp:2026-02-27 10:26:25.00962541 +0000 UTC m=+4.971991556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.319221 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813953a5ffe83 openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:25.020714627 +0000 UTC m=+4.983080733,LastTimestamp:2026-02-27 10:26:25.020714627 +0000 UTC m=+4.983080733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.320075 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813953a754b45 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:25.022110533 +0000 UTC m=+4.984476639,LastTimestamp:2026-02-27 10:26:25.022110533 +0000 UTC m=+4.984476639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.323644 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.1898139548bfae98 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:25.261866648 +0000 UTC m=+5.224232784,LastTimestamp:2026-02-27 10:26:25.261866648 +0000 UTC m=+5.224232784,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.326809 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898139549bce8ba openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:25.278462138 +0000 UTC m=+5.240828274,LastTimestamp:2026-02-27 10:26:25.278462138 +0000 UTC m=+5.240828274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.329840 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898139549cba4db openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:25.279427803 +0000 UTC m=+5.241793949,LastTimestamp:2026-02-27 10:26:25.279427803 +0000 UTC m=+5.241793949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.334863 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981395566ba818 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:25.491240984 +0000 UTC m=+5.453607090,LastTimestamp:2026-02-27 10:26:25.491240984 +0000 UTC m=+5.453607090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.338331 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898139557681a2f 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:25.507785263 +0000 UTC m=+5.470151389,LastTimestamp:2026-02-27 10:26:25.507785263 +0000 UTC m=+5.470151389,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.341717 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898139557805e9c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:25.509375644 +0000 UTC m=+5.471741750,LastTimestamp:2026-02-27 10:26:25.509375644 +0000 UTC m=+5.471741750,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.345628 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981395658b8f13 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:25.744989971 +0000 UTC m=+5.707356087,LastTimestamp:2026-02-27 10:26:25.744989971 +0000 UTC m=+5.707356087,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.349016 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981395664a18c8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:25.757477064 +0000 UTC m=+5.719843210,LastTimestamp:2026-02-27 10:26:25.757477064 +0000 UTC m=+5.719843210,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.354659 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189813956660b2b6 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:25.758958262 +0000 UTC m=+5.721324368,LastTimestamp:2026-02-27 10:26:25.758958262 +0000 UTC m=+5.721324368,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.358523 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18981395739249b7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:25.980311991 +0000 UTC m=+5.942678097,LastTimestamp:2026-02-27 10:26:25.980311991 +0000 UTC m=+5.942678097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.363896 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18981395749ab078 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:25.9976398 +0000 UTC m=+5.960005906,LastTimestamp:2026-02-27 10:26:25.9976398 +0000 UTC m=+5.960005906,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.371410 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 10:27:26 crc kubenswrapper[4728]: &Event{ObjectMeta:{kube-controller-manager-crc.1898139686c1cb8d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 27 10:27:26 crc kubenswrapper[4728]: body: Feb 27 10:27:26 crc kubenswrapper[4728]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:30.597159821 +0000 UTC m=+10.559525957,LastTimestamp:2026-02-27 10:26:30.597159821 +0000 UTC m=+10.559525957,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 
27 10:27:26 crc kubenswrapper[4728]: > Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.375773 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898139686c30121 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:30.597239073 +0000 UTC m=+10.559605219,LastTimestamp:2026-02-27 10:26:30.597239073 +0000 UTC m=+10.559605219,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.380338 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18981394e939d044\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981394e939d044 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.659257924 +0000 UTC m=+3.621624070,LastTimestamp:2026-02-27 10:26:34.827103099 +0000 UTC m=+14.789469215,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.385208 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18981394f763cf89\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981394f763cf89 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.896891273 +0000 UTC m=+3.859257389,LastTimestamp:2026-02-27 10:26:35.042220105 +0000 UTC m=+15.004586211,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.389432 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18981394f84b9b3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981394f84b9b3c openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:23.912082236 +0000 UTC m=+3.874448342,LastTimestamp:2026-02-27 10:26:35.052592924 +0000 UTC m=+15.014959050,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.393340 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 27 10:27:26 crc kubenswrapper[4728]: &Event{ObjectMeta:{kube-apiserver-crc.18981397985e0e47 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 10:27:26 crc kubenswrapper[4728]: body: Feb 27 10:27:26 crc kubenswrapper[4728]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:35.187580487 +0000 UTC m=+15.149946593,LastTimestamp:2026-02-27 10:26:35.187580487 +0000 UTC m=+15.149946593,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 10:27:26 crc kubenswrapper[4728]: > 
Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.397367 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981397985ebfe6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:35.187625958 +0000 UTC m=+15.149992054,LastTimestamp:2026-02-27 10:26:35.187625958 +0000 UTC m=+15.149992054,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.403212 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 27 10:27:26 crc kubenswrapper[4728]: &Event{ObjectMeta:{kube-apiserver-crc.18981397c4ae8248 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 27 10:27:26 crc kubenswrapper[4728]: body: 
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 27 10:27:26 crc kubenswrapper[4728]: Feb 27 10:27:26 crc kubenswrapper[4728]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:35.931050568 +0000 UTC m=+15.893416684,LastTimestamp:2026-02-27 10:26:35.931050568 +0000 UTC m=+15.893416684,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 10:27:26 crc kubenswrapper[4728]: > Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.406536 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18981397c4af5719 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:35.931105049 +0000 UTC m=+15.893471175,LastTimestamp:2026-02-27 10:26:35.931105049 +0000 UTC m=+15.893471175,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.411465 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 
10:27:26 crc kubenswrapper[4728]: &Event{ObjectMeta:{kube-controller-manager-crc.18981398dac91554 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 10:27:26 crc kubenswrapper[4728]: body: Feb 27 10:27:26 crc kubenswrapper[4728]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:40.596858196 +0000 UTC m=+20.559224312,LastTimestamp:2026-02-27 10:26:40.596858196 +0000 UTC m=+20.559224312,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 10:27:26 crc kubenswrapper[4728]: > Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.415415 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981398dac9bf1a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:40.596901658 +0000 UTC m=+20.559267764,LastTimestamp:2026-02-27 10:26:40.596901658 +0000 UTC m=+20.559267764,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.422223 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18981398dac91554\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 10:27:26 crc kubenswrapper[4728]: &Event{ObjectMeta:{kube-controller-manager-crc.18981398dac91554 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 10:27:26 crc kubenswrapper[4728]: body: Feb 27 10:27:26 crc kubenswrapper[4728]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:40.596858196 +0000 UTC m=+20.559224312,LastTimestamp:2026-02-27 10:26:50.597090927 +0000 UTC m=+30.559457073,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 10:27:26 crc kubenswrapper[4728]: > Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.423458 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18981398dac9bf1a\" is forbidden: User 
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981398dac9bf1a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:40.596901658 +0000 UTC m=+20.559267764,LastTimestamp:2026-02-27 10:26:50.597144848 +0000 UTC m=+30.559510994,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.427800 4728 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898139b2f06d3ab openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:50.600125355 +0000 UTC m=+30.562491451,LastTimestamp:2026-02-27 
10:26:50.600125355 +0000 UTC m=+30.562491451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.430498 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189813947d5768fd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189813947d5768fd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:21.849258237 +0000 UTC m=+1.811624333,LastTimestamp:2026-02-27 10:26:50.72204385 +0000 UTC m=+30.684409976,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.435112 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189813948f7b23e3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189813948f7b23e3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:22.153589731 +0000 UTC m=+2.115955837,LastTimestamp:2026-02-27 10:26:50.949176058 +0000 UTC m=+30.911542204,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.438850 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189813949035eb08\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189813949035eb08 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:22.165830408 +0000 UTC m=+2.128196514,LastTimestamp:2026-02-27 10:26:50.958728955 +0000 UTC m=+30.921095091,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.445453 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18981398dac91554\" is forbidden: User \"system:anonymous\" 
cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 10:27:26 crc kubenswrapper[4728]: &Event{ObjectMeta:{kube-controller-manager-crc.18981398dac91554 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 10:27:26 crc kubenswrapper[4728]: body: Feb 27 10:27:26 crc kubenswrapper[4728]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:40.596858196 +0000 UTC m=+20.559224312,LastTimestamp:2026-02-27 10:27:00.596607458 +0000 UTC m=+40.558973564,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 10:27:26 crc kubenswrapper[4728]: > Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.451474 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18981398dac9bf1a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18981398dac9bf1a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:40.596901658 +0000 UTC m=+20.559267764,LastTimestamp:2026-02-27 10:27:00.596645719 +0000 UTC m=+40.559011825,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 10:27:26 crc kubenswrapper[4728]: E0227 10:27:26.456812 4728 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18981398dac91554\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 10:27:26 crc kubenswrapper[4728]: &Event{ObjectMeta:{kube-controller-manager-crc.18981398dac91554 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 10:27:26 crc kubenswrapper[4728]: body: Feb 27 10:27:26 crc kubenswrapper[4728]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:26:40.596858196 +0000 UTC m=+20.559224312,LastTimestamp:2026-02-27 10:27:10.597910415 +0000 UTC m=+50.560276551,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 10:27:26 crc kubenswrapper[4728]: > Feb 27 10:27:26 crc kubenswrapper[4728]: I0227 10:27:26.660632 4728 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:27:26 crc kubenswrapper[4728]: I0227 10:27:26.724844 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:27:26 crc kubenswrapper[4728]: I0227 10:27:26.726183 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:26 crc kubenswrapper[4728]: I0227 10:27:26.726232 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:26 crc kubenswrapper[4728]: I0227 10:27:26.726244 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:27 crc kubenswrapper[4728]: I0227 10:27:27.597620 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:27:27 crc kubenswrapper[4728]: I0227 10:27:27.597855 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:27:27 crc kubenswrapper[4728]: I0227 10:27:27.599253 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:27 crc kubenswrapper[4728]: I0227 10:27:27.599403 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:27 crc kubenswrapper[4728]: I0227 10:27:27.599435 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:27 crc kubenswrapper[4728]: I0227 10:27:27.607298 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:27:27 crc kubenswrapper[4728]: I0227 10:27:27.660241 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:27:27 crc kubenswrapper[4728]: I0227 10:27:27.724772 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:27:27 crc kubenswrapper[4728]: I0227 10:27:27.726422 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:27 crc kubenswrapper[4728]: I0227 10:27:27.726748 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:27 crc kubenswrapper[4728]: I0227 10:27:27.726918 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:27 crc kubenswrapper[4728]: I0227 10:27:27.728073 4728 scope.go:117] "RemoveContainer" containerID="3042ae9f43adfddc493f967148f7581c03960800b63252b263a48c4cdd9e10e9" Feb 27 10:27:28 crc kubenswrapper[4728]: I0227 10:27:28.013451 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 27 10:27:28 crc kubenswrapper[4728]: I0227 10:27:28.016567 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48"} Feb 27 10:27:28 crc kubenswrapper[4728]: I0227 10:27:28.016660 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:27:28 crc kubenswrapper[4728]: I0227 
10:27:28.016837 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:27:28 crc kubenswrapper[4728]: I0227 10:27:28.017067 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:27:28 crc kubenswrapper[4728]: I0227 10:27:28.019154 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:28 crc kubenswrapper[4728]: I0227 10:27:28.019187 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:28 crc kubenswrapper[4728]: I0227 10:27:28.019346 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:28 crc kubenswrapper[4728]: I0227 10:27:28.019821 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:28 crc kubenswrapper[4728]: I0227 10:27:28.019869 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:28 crc kubenswrapper[4728]: I0227 10:27:28.019823 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:28 crc kubenswrapper[4728]: I0227 10:27:28.662265 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:27:29 crc kubenswrapper[4728]: I0227 10:27:29.021852 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 10:27:29 crc kubenswrapper[4728]: I0227 10:27:29.022579 4728 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 27 10:27:29 crc kubenswrapper[4728]: I0227 10:27:29.025155 4728 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48" exitCode=255 Feb 27 10:27:29 crc kubenswrapper[4728]: I0227 10:27:29.025200 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48"} Feb 27 10:27:29 crc kubenswrapper[4728]: I0227 10:27:29.025459 4728 scope.go:117] "RemoveContainer" containerID="3042ae9f43adfddc493f967148f7581c03960800b63252b263a48c4cdd9e10e9" Feb 27 10:27:29 crc kubenswrapper[4728]: I0227 10:27:29.025650 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:27:29 crc kubenswrapper[4728]: I0227 10:27:29.025871 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:27:29 crc kubenswrapper[4728]: I0227 10:27:29.026544 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:29 crc kubenswrapper[4728]: I0227 10:27:29.026580 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:29 crc kubenswrapper[4728]: I0227 10:27:29.026592 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:29 crc kubenswrapper[4728]: I0227 10:27:29.027541 4728 scope.go:117] "RemoveContainer" containerID="0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48" Feb 27 10:27:29 crc kubenswrapper[4728]: E0227 10:27:29.027717 4728 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 10:27:29 crc kubenswrapper[4728]: I0227 10:27:29.034097 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:29 crc kubenswrapper[4728]: I0227 10:27:29.034185 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:29 crc kubenswrapper[4728]: I0227 10:27:29.034477 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:29 crc kubenswrapper[4728]: I0227 10:27:29.256173 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:27:29 crc kubenswrapper[4728]: W0227 10:27:29.474343 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 27 10:27:29 crc kubenswrapper[4728]: E0227 10:27:29.474407 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 27 10:27:29 crc kubenswrapper[4728]: I0227 10:27:29.658992 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at 
the cluster scope Feb 27 10:27:30 crc kubenswrapper[4728]: I0227 10:27:30.029683 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 10:27:30 crc kubenswrapper[4728]: I0227 10:27:30.031574 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:27:30 crc kubenswrapper[4728]: I0227 10:27:30.032677 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:30 crc kubenswrapper[4728]: I0227 10:27:30.032714 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:30 crc kubenswrapper[4728]: I0227 10:27:30.032722 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:30 crc kubenswrapper[4728]: I0227 10:27:30.033220 4728 scope.go:117] "RemoveContainer" containerID="0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48" Feb 27 10:27:30 crc kubenswrapper[4728]: E0227 10:27:30.033368 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 10:27:30 crc kubenswrapper[4728]: I0227 10:27:30.660343 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:27:30 crc kubenswrapper[4728]: E0227 10:27:30.801380 4728 eviction_manager.go:285] "Eviction 
manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 10:27:30 crc kubenswrapper[4728]: W0227 10:27:30.980314 4728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 27 10:27:30 crc kubenswrapper[4728]: E0227 10:27:30.980360 4728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 27 10:27:31 crc kubenswrapper[4728]: E0227 10:27:31.378418 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 27 10:27:31 crc kubenswrapper[4728]: I0227 10:27:31.387539 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:27:31 crc kubenswrapper[4728]: I0227 10:27:31.388620 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:31 crc kubenswrapper[4728]: I0227 10:27:31.388674 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:31 crc kubenswrapper[4728]: I0227 10:27:31.388690 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:31 crc kubenswrapper[4728]: I0227 10:27:31.388725 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 10:27:31 crc kubenswrapper[4728]: 
E0227 10:27:31.392655 4728 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 27 10:27:31 crc kubenswrapper[4728]: I0227 10:27:31.661819 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:27:32 crc kubenswrapper[4728]: I0227 10:27:32.067006 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:27:32 crc kubenswrapper[4728]: I0227 10:27:32.067412 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:27:32 crc kubenswrapper[4728]: I0227 10:27:32.068479 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:32 crc kubenswrapper[4728]: I0227 10:27:32.068608 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:32 crc kubenswrapper[4728]: I0227 10:27:32.068635 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:32 crc kubenswrapper[4728]: I0227 10:27:32.069647 4728 scope.go:117] "RemoveContainer" containerID="0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48" Feb 27 10:27:32 crc kubenswrapper[4728]: E0227 10:27:32.069949 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 10:27:32 crc kubenswrapper[4728]: I0227 10:27:32.661103 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:27:32 crc kubenswrapper[4728]: I0227 10:27:32.819271 4728 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 10:27:32 crc kubenswrapper[4728]: I0227 10:27:32.836432 4728 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 27 10:27:33 crc kubenswrapper[4728]: I0227 10:27:33.661958 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:27:34 crc kubenswrapper[4728]: I0227 10:27:34.660679 4728 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 10:27:35 crc kubenswrapper[4728]: I0227 10:27:35.631574 4728 csr.go:261] certificate signing request csr-dqgqg is approved, waiting to be issued Feb 27 10:27:35 crc kubenswrapper[4728]: I0227 10:27:35.640089 4728 csr.go:257] certificate signing request csr-dqgqg is issued Feb 27 10:27:35 crc kubenswrapper[4728]: I0227 10:27:35.681141 4728 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 27 10:27:36 crc kubenswrapper[4728]: I0227 10:27:36.513417 4728 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 27 10:27:36 crc kubenswrapper[4728]: I0227 10:27:36.641072 4728 
certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-04 02:19:50.999959279 +0000 UTC Feb 27 10:27:36 crc kubenswrapper[4728]: I0227 10:27:36.641144 4728 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7455h52m14.358822961s for next certificate rotation Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.393581 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.395775 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.395841 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.395866 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.396156 4728 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.407227 4728 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.407590 4728 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 27 10:27:38 crc kubenswrapper[4728]: E0227 10:27:38.407627 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.411663 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.411712 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.411730 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.411753 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.411770 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:38Z","lastTransitionTime":"2026-02-27T10:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:38 crc kubenswrapper[4728]: E0227 10:27:38.430856 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.445285 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.445369 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.445395 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.445429 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 
10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.445452 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:38Z","lastTransitionTime":"2026-02-27T10:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:38 crc kubenswrapper[4728]: E0227 10:27:38.461222 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.471554 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.471765 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.471899 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.472037 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.472164 4728 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:38Z","lastTransitionTime":"2026-02-27T10:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:38 crc kubenswrapper[4728]: E0227 10:27:38.487445 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.498005 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.498067 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.498091 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.498120 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:38 crc kubenswrapper[4728]: I0227 10:27:38.498140 4728 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:38Z","lastTransitionTime":"2026-02-27T10:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:38 crc kubenswrapper[4728]: E0227 10:27:38.514591 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:38 crc kubenswrapper[4728]: E0227 10:27:38.514904 4728 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 10:27:38 crc kubenswrapper[4728]: E0227 10:27:38.514942 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:38 crc kubenswrapper[4728]: E0227 10:27:38.615730 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:38 crc kubenswrapper[4728]: E0227 10:27:38.716541 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:38 crc kubenswrapper[4728]: E0227 10:27:38.817722 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:38 crc kubenswrapper[4728]: E0227 10:27:38.918617 4728 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Feb 27 10:27:39 crc kubenswrapper[4728]: E0227 10:27:39.019467 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:39 crc kubenswrapper[4728]: E0227 10:27:39.120374 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:39 crc kubenswrapper[4728]: E0227 10:27:39.221424 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:39 crc kubenswrapper[4728]: I0227 10:27:39.318697 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:27:39 crc kubenswrapper[4728]: I0227 10:27:39.318921 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:27:39 crc kubenswrapper[4728]: I0227 10:27:39.320639 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:39 crc kubenswrapper[4728]: I0227 10:27:39.320705 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:39 crc kubenswrapper[4728]: I0227 10:27:39.320722 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:39 crc kubenswrapper[4728]: E0227 10:27:39.322404 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:39 crc kubenswrapper[4728]: E0227 10:27:39.423423 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:39 crc kubenswrapper[4728]: E0227 10:27:39.524653 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:39 crc kubenswrapper[4728]: E0227 
10:27:39.625399 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:39 crc kubenswrapper[4728]: E0227 10:27:39.725977 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:39 crc kubenswrapper[4728]: E0227 10:27:39.826489 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:39 crc kubenswrapper[4728]: E0227 10:27:39.927157 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:40 crc kubenswrapper[4728]: E0227 10:27:40.027476 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:40 crc kubenswrapper[4728]: E0227 10:27:40.128195 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:40 crc kubenswrapper[4728]: E0227 10:27:40.229138 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:40 crc kubenswrapper[4728]: E0227 10:27:40.330075 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:40 crc kubenswrapper[4728]: E0227 10:27:40.431055 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:40 crc kubenswrapper[4728]: E0227 10:27:40.531433 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:40 crc kubenswrapper[4728]: E0227 10:27:40.631867 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:40 crc kubenswrapper[4728]: E0227 10:27:40.732219 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 
10:27:40 crc kubenswrapper[4728]: E0227 10:27:40.801747 4728 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 10:27:40 crc kubenswrapper[4728]: E0227 10:27:40.832341 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:40 crc kubenswrapper[4728]: E0227 10:27:40.933364 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:41 crc kubenswrapper[4728]: E0227 10:27:41.033531 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:41 crc kubenswrapper[4728]: I0227 10:27:41.111674 4728 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 27 10:27:41 crc kubenswrapper[4728]: E0227 10:27:41.134736 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:41 crc kubenswrapper[4728]: E0227 10:27:41.235146 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:41 crc kubenswrapper[4728]: E0227 10:27:41.335983 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:41 crc kubenswrapper[4728]: E0227 10:27:41.436740 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:41 crc kubenswrapper[4728]: E0227 10:27:41.537172 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:41 crc kubenswrapper[4728]: E0227 10:27:41.637342 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:41 crc kubenswrapper[4728]: E0227 10:27:41.737666 4728 kubelet_node_status.go:503] "Error getting 
the current node from lister" err="node \"crc\" not found" Feb 27 10:27:41 crc kubenswrapper[4728]: E0227 10:27:41.837783 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:41 crc kubenswrapper[4728]: E0227 10:27:41.938576 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:42 crc kubenswrapper[4728]: E0227 10:27:42.039417 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:42 crc kubenswrapper[4728]: E0227 10:27:42.139824 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:42 crc kubenswrapper[4728]: E0227 10:27:42.240957 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:42 crc kubenswrapper[4728]: E0227 10:27:42.341693 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:42 crc kubenswrapper[4728]: E0227 10:27:42.441841 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:42 crc kubenswrapper[4728]: E0227 10:27:42.542699 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:42 crc kubenswrapper[4728]: E0227 10:27:42.643608 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:42 crc kubenswrapper[4728]: E0227 10:27:42.744724 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:42 crc kubenswrapper[4728]: E0227 10:27:42.845823 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:42 crc kubenswrapper[4728]: E0227 10:27:42.946603 4728 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:43 crc kubenswrapper[4728]: E0227 10:27:43.046818 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:43 crc kubenswrapper[4728]: E0227 10:27:43.147837 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:43 crc kubenswrapper[4728]: E0227 10:27:43.248881 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:43 crc kubenswrapper[4728]: E0227 10:27:43.349990 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:43 crc kubenswrapper[4728]: E0227 10:27:43.450724 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:43 crc kubenswrapper[4728]: E0227 10:27:43.551915 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:43 crc kubenswrapper[4728]: E0227 10:27:43.652087 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:43 crc kubenswrapper[4728]: E0227 10:27:43.752605 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:43 crc kubenswrapper[4728]: E0227 10:27:43.853554 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:43 crc kubenswrapper[4728]: E0227 10:27:43.954522 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:44 crc kubenswrapper[4728]: E0227 10:27:44.055599 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:44 crc 
kubenswrapper[4728]: E0227 10:27:44.155737 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:44 crc kubenswrapper[4728]: E0227 10:27:44.256489 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:44 crc kubenswrapper[4728]: E0227 10:27:44.356654 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:44 crc kubenswrapper[4728]: E0227 10:27:44.457330 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:44 crc kubenswrapper[4728]: E0227 10:27:44.558367 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:44 crc kubenswrapper[4728]: E0227 10:27:44.658735 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:44 crc kubenswrapper[4728]: I0227 10:27:44.724246 4728 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 10:27:44 crc kubenswrapper[4728]: I0227 10:27:44.725898 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:44 crc kubenswrapper[4728]: I0227 10:27:44.725959 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:44 crc kubenswrapper[4728]: I0227 10:27:44.725977 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:44 crc kubenswrapper[4728]: I0227 10:27:44.727000 4728 scope.go:117] "RemoveContainer" containerID="0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48" Feb 27 10:27:44 crc kubenswrapper[4728]: E0227 10:27:44.727278 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 10:27:44 crc kubenswrapper[4728]: E0227 10:27:44.759009 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:44 crc kubenswrapper[4728]: E0227 10:27:44.859464 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:44 crc kubenswrapper[4728]: E0227 10:27:44.960601 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:45 crc kubenswrapper[4728]: E0227 10:27:45.061660 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:45 crc kubenswrapper[4728]: E0227 10:27:45.161934 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:45 crc kubenswrapper[4728]: E0227 10:27:45.262453 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:45 crc kubenswrapper[4728]: E0227 10:27:45.362712 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:45 crc kubenswrapper[4728]: E0227 10:27:45.463139 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:45 crc kubenswrapper[4728]: E0227 10:27:45.563429 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:45 crc kubenswrapper[4728]: E0227 10:27:45.664224 4728 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"crc\" not found" Feb 27 10:27:45 crc kubenswrapper[4728]: E0227 10:27:45.765383 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:45 crc kubenswrapper[4728]: E0227 10:27:45.865804 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:45 crc kubenswrapper[4728]: E0227 10:27:45.966632 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:46 crc kubenswrapper[4728]: E0227 10:27:46.067645 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:46 crc kubenswrapper[4728]: E0227 10:27:46.168319 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:46 crc kubenswrapper[4728]: E0227 10:27:46.269269 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:46 crc kubenswrapper[4728]: E0227 10:27:46.370272 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:46 crc kubenswrapper[4728]: E0227 10:27:46.471429 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:46 crc kubenswrapper[4728]: E0227 10:27:46.571735 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:46 crc kubenswrapper[4728]: E0227 10:27:46.672738 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:46 crc kubenswrapper[4728]: E0227 10:27:46.773588 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:46 crc kubenswrapper[4728]: E0227 10:27:46.873766 4728 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:46 crc kubenswrapper[4728]: E0227 10:27:46.974485 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:47 crc kubenswrapper[4728]: E0227 10:27:47.074980 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:47 crc kubenswrapper[4728]: E0227 10:27:47.176116 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:47 crc kubenswrapper[4728]: E0227 10:27:47.276552 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:47 crc kubenswrapper[4728]: E0227 10:27:47.377663 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:47 crc kubenswrapper[4728]: E0227 10:27:47.478189 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:47 crc kubenswrapper[4728]: E0227 10:27:47.578578 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:47 crc kubenswrapper[4728]: E0227 10:27:47.679568 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:47 crc kubenswrapper[4728]: E0227 10:27:47.780704 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:47 crc kubenswrapper[4728]: E0227 10:27:47.881056 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:47 crc kubenswrapper[4728]: E0227 10:27:47.981948 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:48 crc 
kubenswrapper[4728]: E0227 10:27:48.082246 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:48 crc kubenswrapper[4728]: E0227 10:27:48.183306 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:48 crc kubenswrapper[4728]: E0227 10:27:48.283963 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:48 crc kubenswrapper[4728]: E0227 10:27:48.384587 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:48 crc kubenswrapper[4728]: E0227 10:27:48.485575 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:48 crc kubenswrapper[4728]: E0227 10:27:48.533662 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.537955 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.538055 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.538079 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.538595 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.538873 4728 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:48Z","lastTransitionTime":"2026-02-27T10:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:48 crc kubenswrapper[4728]: E0227 10:27:48.554959 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.564403 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.564453 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.564530 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.564555 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.564572 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:48Z","lastTransitionTime":"2026-02-27T10:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:48 crc kubenswrapper[4728]: E0227 10:27:48.580842 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.585215 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.585260 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.585276 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.585296 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.585312 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:48Z","lastTransitionTime":"2026-02-27T10:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:48 crc kubenswrapper[4728]: E0227 10:27:48.595063 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.599166 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.599204 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.599216 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.599233 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:48 crc kubenswrapper[4728]: I0227 10:27:48.599244 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:48Z","lastTransitionTime":"2026-02-27T10:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:48 crc kubenswrapper[4728]: E0227 10:27:48.613267 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:48 crc kubenswrapper[4728]: E0227 10:27:48.613482 4728 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 10:27:48 crc kubenswrapper[4728]: E0227 10:27:48.613548 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:48 crc kubenswrapper[4728]: E0227 10:27:48.714415 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:48 crc kubenswrapper[4728]: E0227 10:27:48.814934 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:48 crc kubenswrapper[4728]: E0227 10:27:48.916002 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:49 crc kubenswrapper[4728]: E0227 10:27:49.016910 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:49 crc kubenswrapper[4728]: E0227 10:27:49.117530 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:49 crc kubenswrapper[4728]: E0227 10:27:49.218669 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:49 crc kubenswrapper[4728]: E0227 10:27:49.318981 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:49 crc kubenswrapper[4728]: E0227 10:27:49.420067 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:49 crc kubenswrapper[4728]: E0227 10:27:49.520985 4728 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:49 crc kubenswrapper[4728]: E0227 10:27:49.621413 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:49 crc kubenswrapper[4728]: E0227 10:27:49.721561 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:49 crc kubenswrapper[4728]: E0227 10:27:49.821799 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:49 crc kubenswrapper[4728]: E0227 10:27:49.922425 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:50 crc kubenswrapper[4728]: E0227 10:27:50.022537 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:50 crc kubenswrapper[4728]: E0227 10:27:50.123612 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:50 crc kubenswrapper[4728]: E0227 10:27:50.229362 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:50 crc kubenswrapper[4728]: E0227 10:27:50.330002 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:50 crc kubenswrapper[4728]: E0227 10:27:50.430405 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:50 crc kubenswrapper[4728]: E0227 10:27:50.531495 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:50 crc kubenswrapper[4728]: E0227 10:27:50.632733 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:50 crc 
kubenswrapper[4728]: E0227 10:27:50.733313 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:50 crc kubenswrapper[4728]: E0227 10:27:50.801926 4728 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 10:27:50 crc kubenswrapper[4728]: E0227 10:27:50.834449 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:50 crc kubenswrapper[4728]: E0227 10:27:50.935213 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:51 crc kubenswrapper[4728]: E0227 10:27:51.063872 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:51 crc kubenswrapper[4728]: E0227 10:27:51.164655 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:51 crc kubenswrapper[4728]: E0227 10:27:51.265177 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:51 crc kubenswrapper[4728]: E0227 10:27:51.365818 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:51 crc kubenswrapper[4728]: E0227 10:27:51.466869 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:51 crc kubenswrapper[4728]: E0227 10:27:51.567339 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:51 crc kubenswrapper[4728]: E0227 10:27:51.668021 4728 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 10:27:51 crc kubenswrapper[4728]: I0227 10:27:51.692849 4728 reflector.go:368] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:160 Feb 27 10:27:51 crc kubenswrapper[4728]: I0227 10:27:51.770587 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:51 crc kubenswrapper[4728]: I0227 10:27:51.770653 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:51 crc kubenswrapper[4728]: I0227 10:27:51.770665 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:51 crc kubenswrapper[4728]: I0227 10:27:51.770681 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:51 crc kubenswrapper[4728]: I0227 10:27:51.770691 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:51Z","lastTransitionTime":"2026-02-27T10:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:51 crc kubenswrapper[4728]: I0227 10:27:51.872776 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:51 crc kubenswrapper[4728]: I0227 10:27:51.872836 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:51 crc kubenswrapper[4728]: I0227 10:27:51.872859 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:51 crc kubenswrapper[4728]: I0227 10:27:51.872887 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:51 crc kubenswrapper[4728]: I0227 10:27:51.872908 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:51Z","lastTransitionTime":"2026-02-27T10:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:51 crc kubenswrapper[4728]: I0227 10:27:51.975716 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:51 crc kubenswrapper[4728]: I0227 10:27:51.975763 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:51 crc kubenswrapper[4728]: I0227 10:27:51.975777 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:51 crc kubenswrapper[4728]: I0227 10:27:51.975972 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:51 crc kubenswrapper[4728]: I0227 10:27:51.975991 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:51Z","lastTransitionTime":"2026-02-27T10:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.079042 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.079132 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.079156 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.079182 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.079202 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:52Z","lastTransitionTime":"2026-02-27T10:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.181899 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.181963 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.181981 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.182008 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.182026 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:52Z","lastTransitionTime":"2026-02-27T10:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.283783 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.283823 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.283833 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.283846 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.283856 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:52Z","lastTransitionTime":"2026-02-27T10:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.386012 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.386076 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.386093 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.386120 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.386137 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:52Z","lastTransitionTime":"2026-02-27T10:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.488828 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.488873 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.488882 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.488895 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.488904 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:52Z","lastTransitionTime":"2026-02-27T10:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.592098 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.592158 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.592176 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.592200 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.592217 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:52Z","lastTransitionTime":"2026-02-27T10:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.692663 4728 apiserver.go:52] "Watching apiserver" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.695010 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.695065 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.695083 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.695111 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.695130 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:52Z","lastTransitionTime":"2026-02-27T10:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.700429 4728 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.700809 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.701279 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.701349 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.701483 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:27:52 crc kubenswrapper[4728]: E0227 10:27:52.701656 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.701665 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 10:27:52 crc kubenswrapper[4728]: E0227 10:27:52.701873 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.702378 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.702580 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:27:52 crc kubenswrapper[4728]: E0227 10:27:52.702682 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.706325 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.706413 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.706434 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.706624 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.706815 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.706995 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.708105 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.708119 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.708661 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.730071 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.741821 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.753014 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.762900 4728 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.768917 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.778611 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.794114 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.799595 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.799667 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.799710 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.799754 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.799821 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.799855 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.799889 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.799924 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 27 10:27:52 crc 
kubenswrapper[4728]: I0227 10:27:52.799959 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.799990 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800022 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800054 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800085 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800116 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800148 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800181 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800217 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800254 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800285 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 
10:27:52.800327 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800359 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800391 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800425 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800457 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800488 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800548 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800582 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800616 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800647 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800679 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800712 4728 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800744 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800778 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800812 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800849 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800881 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800915 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800948 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.800982 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.801017 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.801051 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.801085 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.801118 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.801151 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.801184 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.801217 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.801209 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.801251 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.801293 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.801306 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.801346 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.801364 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.801364 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.801343 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.801388 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.801434 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:52Z","lastTransitionTime":"2026-02-27T10:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.801445 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.802578 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.802623 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.802664 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.802698 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.802733 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.802767 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803202 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803250 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803291 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803331 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 10:27:52 crc kubenswrapper[4728]: 
I0227 10:27:52.803370 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803408 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803444 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803479 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803544 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803600 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803692 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803726 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803761 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803813 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803855 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803897 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803933 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803970 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804005 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804039 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804075 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804111 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804145 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804179 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804334 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804372 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804406 4728 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804441 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804478 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804557 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804594 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804666 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804708 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804746 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804781 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804816 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804850 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:27:52 
crc kubenswrapper[4728]: I0227 10:27:52.804885 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804921 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804958 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804995 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.805032 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.805069 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.805104 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.805139 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.805176 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.805212 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.805248 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 27 10:27:52 crc 
kubenswrapper[4728]: I0227 10:27:52.805282 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.805318 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.805404 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.805444 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.805479 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.805963 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806007 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806042 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806078 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806116 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806152 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806187 4728 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806221 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806257 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806346 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806546 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806609 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806650 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806688 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806725 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806867 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806919 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806960 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806997 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807033 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807068 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807103 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807139 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807176 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807214 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807251 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807286 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807323 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 27 10:27:52 
crc kubenswrapper[4728]: I0227 10:27:52.807464 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807674 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807766 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807827 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807873 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807915 4728 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807954 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807992 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808031 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808068 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808105 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808142 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808177 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808222 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808257 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808292 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808329 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808367 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808402 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808439 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808482 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808556 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 10:27:52 crc 
kubenswrapper[4728]: I0227 10:27:52.808595 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808939 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808975 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.809015 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.809052 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.809423 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.809464 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.809527 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.809566 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.809604 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.809641 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 27 10:27:52 
crc kubenswrapper[4728]: I0227 10:27:52.809679 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.809722 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.809762 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.809801 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.809838 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.809878 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810027 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810070 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810110 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810154 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810198 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 
27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810238 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810276 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810316 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810356 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810395 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810437 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810476 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810557 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810601 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810641 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810680 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 
27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810718 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810754 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810795 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810861 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810908 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810945 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810994 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.811046 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.811255 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.811314 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.811363 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.811412 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.811456 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.811532 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.811579 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.811623 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.811662 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.811736 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.811767 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.814753 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 10:27:52 crc kubenswrapper[4728]: 
I0227 10:27:52.816745 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.819723 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.820829 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.824815 4728 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.802104 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.802377 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.802403 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.802533 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.802476 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803194 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.803915 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804136 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804156 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804258 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804396 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804589 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804720 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.804860 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.805195 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.805263 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.805277 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.805992 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806035 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.806164 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807267 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807354 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.807856 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808273 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808306 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808448 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.808470 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.809071 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.809807 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810009 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810275 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810651 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.810676 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.811060 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.811141 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.811327 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.811466 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.811721 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.811837 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.812320 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.812487 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.812707 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.812882 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.813306 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: E0227 10:27:52.813108 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:27:53.312748444 +0000 UTC m=+93.275114570 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:27:52 crc kubenswrapper[4728]: E0227 10:27:52.829424 4728 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:27:52 crc kubenswrapper[4728]: E0227 10:27:52.829622 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:27:53.32958597 +0000 UTC m=+93.291952116 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.813338 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.813388 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.813678 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.813844 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.813900 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.814189 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.814475 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.814491 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.814498 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.814625 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.814665 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.815043 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.815115 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.815151 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.815164 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.815292 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.815568 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.815606 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.815979 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.816582 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.816605 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.816649 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.816797 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.817011 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.817210 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.817428 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.817577 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.817610 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.817257 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.817787 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.817832 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.817993 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.818156 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.818364 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.818561 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.818631 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.818884 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.819174 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.819757 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.819846 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.820038 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.820293 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.820474 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.820620 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.820651 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.820864 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: E0227 10:27:52.821129 4728 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:27:52 crc kubenswrapper[4728]: E0227 10:27:52.832656 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-27 10:27:53.33262802 +0000 UTC m=+93.294994166 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.821484 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.822399 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.823409 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.823580 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.823841 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.823977 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.824084 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.824246 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.824406 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.824448 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.824596 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.824607 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.825025 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: E0227 10:27:52.839563 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:27:52 crc kubenswrapper[4728]: E0227 10:27:52.839604 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:27:52 crc kubenswrapper[4728]: E0227 10:27:52.839746 4728 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:27:52 crc kubenswrapper[4728]: E0227 10:27:52.839838 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 10:27:53.339818201 +0000 UTC m=+93.302184317 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.841234 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.841294 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.842258 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: E0227 10:27:52.844709 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:27:52 crc kubenswrapper[4728]: E0227 10:27:52.844752 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:27:52 crc kubenswrapper[4728]: E0227 10:27:52.844776 4728 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:27:52 crc kubenswrapper[4728]: E0227 10:27:52.844859 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 10:27:53.344834104 +0000 UTC m=+93.307200240 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.847723 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.848328 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.848361 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.848396 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.848595 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.848674 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.848791 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.848872 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.849102 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.849200 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.849320 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.850070 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.850154 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.850659 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.850863 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.850995 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.856276 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.856427 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.856792 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.857128 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.857283 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.857551 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.857638 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.857892 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.857955 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.857967 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.858319 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.858533 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.859140 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.859252 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.859603 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.859709 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.859723 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.859847 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.858815 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.859780 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.859944 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.859976 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.860013 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.860317 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.860450 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.860443 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.861015 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.861605 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.861988 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.862691 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.863550 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.863700 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.863614 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.863891 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.864079 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.864065 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.864623 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.864721 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.864938 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.864989 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.865102 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.865138 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.865134 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") 
pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.865325 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.865379 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.865656 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.865694 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.865772 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.865903 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.866531 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.866854 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.866616 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.867082 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.867113 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.871595 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.880948 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.881402 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.881482 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.881474 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.881598 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.881623 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.881674 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.881704 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.881812 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.882210 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.884057 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.884593 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.884924 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.885239 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.885614 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.887697 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.887907 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.904367 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.904420 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.904437 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.904460 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.904477 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:52Z","lastTransitionTime":"2026-02-27T10:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.905745 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.909631 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.912879 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.912929 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913030 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node 
\"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913044 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913057 4728 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913068 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913081 4728 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913084 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913093 4728 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913152 4728 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913163 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913173 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913183 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913193 4728 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913203 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913213 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913222 4728 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node 
\"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913233 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913242 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913090 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913253 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913291 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913303 4728 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913312 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on 
node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913321 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913331 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913341 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913352 4728 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913360 4728 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913369 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913378 4728 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") 
on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913386 4728 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913394 4728 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913403 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913411 4728 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913419 4728 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913427 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913435 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913443 4728 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913452 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913461 4728 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913469 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913479 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913487 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913496 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913519 4728 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913528 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913537 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913545 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913553 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913563 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913572 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913581 4728 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node 
\"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913589 4728 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913598 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913606 4728 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913615 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913624 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913634 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913643 4728 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: 
I0227 10:27:52.913652 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913661 4728 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913669 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913677 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913685 4728 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913693 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913702 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913710 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" 
(UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913719 4728 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913728 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913737 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913745 4728 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913753 4728 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913761 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913770 4728 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913778 4728 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913788 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913797 4728 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913805 4728 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913814 4728 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913823 4728 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913832 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913840 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913849 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913857 4728 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913865 4728 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913875 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913883 4728 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913892 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" 
DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913901 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913910 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913919 4728 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913927 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913936 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913944 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913952 4728 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913960 4728 
reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913969 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913977 4728 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913985 4728 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.913992 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914000 4728 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914007 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914015 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node 
\"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914024 4728 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914032 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914040 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914048 4728 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914056 4728 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914065 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914073 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914082 4728 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914090 4728 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914098 4728 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914106 4728 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914114 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914122 4728 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914130 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914138 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914146 4728 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914154 4728 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914161 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914170 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914178 4728 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914185 4728 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914193 4728 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 27 
10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914201 4728 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914209 4728 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914217 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914227 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914235 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914243 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914250 4728 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914258 4728 reconciler_common.go:293] "Volume detached for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914266 4728 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914274 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914282 4728 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914290 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914297 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914304 4728 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914312 4728 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" 
DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914320 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914327 4728 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914335 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914347 4728 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914355 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914363 4728 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914371 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc 
kubenswrapper[4728]: I0227 10:27:52.914379 4728 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914387 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914397 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914404 4728 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914412 4728 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914419 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914427 4728 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914435 4728 reconciler_common.go:293] 
"Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914443 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914450 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914460 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914468 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914477 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914485 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914493 4728 reconciler_common.go:293] "Volume 
detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914514 4728 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914522 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914533 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914542 4728 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914550 4728 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914558 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914566 4728 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914574 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914581 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914589 4728 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914597 4728 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914605 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914613 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914623 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" 
DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914631 4728 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914639 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914647 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914655 4728 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914662 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914670 4728 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914677 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914684 4728 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914692 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914700 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914707 4728 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914716 4728 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914723 4728 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914731 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914739 4728 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914747 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914755 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.914763 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:52 crc kubenswrapper[4728]: I0227 10:27:52.919320 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.007154 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.007212 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.007230 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.007256 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.007272 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:53Z","lastTransitionTime":"2026-02-27T10:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.015942 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.026835 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.042086 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.050293 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:27:53 crc kubenswrapper[4728]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 27 10:27:53 crc kubenswrapper[4728]: set -o allexport Feb 27 10:27:53 crc kubenswrapper[4728]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 27 10:27:53 crc kubenswrapper[4728]: source /etc/kubernetes/apiserver-url.env Feb 27 10:27:53 crc kubenswrapper[4728]: else Feb 27 10:27:53 crc kubenswrapper[4728]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 27 10:27:53 crc kubenswrapper[4728]: exit 1 Feb 27 10:27:53 crc kubenswrapper[4728]: fi Feb 27 10:27:53 crc kubenswrapper[4728]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 27 10:27:53 crc kubenswrapper[4728]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:27:53 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.051458 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.056610 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.063957 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:27:53 crc kubenswrapper[4728]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 10:27:53 crc kubenswrapper[4728]: if [[ -f "/env/_master" ]]; then Feb 27 10:27:53 crc kubenswrapper[4728]: set -o allexport Feb 27 10:27:53 crc kubenswrapper[4728]: source "/env/_master" Feb 27 10:27:53 crc kubenswrapper[4728]: set +o allexport Feb 27 10:27:53 crc kubenswrapper[4728]: fi Feb 27 10:27:53 crc kubenswrapper[4728]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 27 10:27:53 crc kubenswrapper[4728]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 27 10:27:53 crc kubenswrapper[4728]: ho_enable="--enable-hybrid-overlay" Feb 27 10:27:53 crc kubenswrapper[4728]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 27 10:27:53 crc kubenswrapper[4728]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 27 10:27:53 crc kubenswrapper[4728]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 27 10:27:53 crc kubenswrapper[4728]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 27 10:27:53 crc kubenswrapper[4728]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 27 10:27:53 crc kubenswrapper[4728]: --webhook-host=127.0.0.1 \ Feb 27 10:27:53 crc kubenswrapper[4728]: --webhook-port=9743 \ Feb 27 10:27:53 crc kubenswrapper[4728]: ${ho_enable} \ Feb 27 10:27:53 crc kubenswrapper[4728]: --enable-interconnect \ Feb 27 10:27:53 crc kubenswrapper[4728]: --disable-approver \ Feb 27 
10:27:53 crc kubenswrapper[4728]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 27 10:27:53 crc kubenswrapper[4728]: --wait-for-kubernetes-api=200s \ Feb 27 10:27:53 crc kubenswrapper[4728]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 27 10:27:53 crc kubenswrapper[4728]: --loglevel="${LOGLEVEL}" Feb 27 10:27:53 crc kubenswrapper[4728]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:27:53 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.067896 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:27:53 crc kubenswrapper[4728]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 10:27:53 crc kubenswrapper[4728]: if [[ -f "/env/_master" ]]; then Feb 27 10:27:53 crc kubenswrapper[4728]: set -o allexport Feb 27 10:27:53 crc kubenswrapper[4728]: source "/env/_master" Feb 27 10:27:53 crc kubenswrapper[4728]: set +o allexport Feb 27 10:27:53 crc kubenswrapper[4728]: fi Feb 27 10:27:53 crc kubenswrapper[4728]: Feb 27 10:27:53 crc kubenswrapper[4728]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 27 10:27:53 crc kubenswrapper[4728]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 27 10:27:53 crc kubenswrapper[4728]: --disable-webhook \ Feb 27 10:27:53 crc kubenswrapper[4728]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 27 10:27:53 crc kubenswrapper[4728]: --loglevel="${LOGLEVEL}" Feb 27 10:27:53 crc kubenswrapper[4728]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:27:53 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.069147 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 27 10:27:53 crc kubenswrapper[4728]: W0227 10:27:53.073914 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-2c4c1ed5b1de6d9baca4e5da5605743ef0420145cbd16f894cc33a1b3144c705 WatchSource:0}: Error finding container 2c4c1ed5b1de6d9baca4e5da5605743ef0420145cbd16f894cc33a1b3144c705: Status 404 returned error can't find the container with id 2c4c1ed5b1de6d9baca4e5da5605743ef0420145cbd16f894cc33a1b3144c705 Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.077351 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.078653 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.098353 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2c4c1ed5b1de6d9baca4e5da5605743ef0420145cbd16f894cc33a1b3144c705"} Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.100395 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"077d9e9dc425a63631bd5458b29b2e66699606e78d289ef141796d1abe054541"} Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.101743 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.101884 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:27:53 crc kubenswrapper[4728]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 10:27:53 crc kubenswrapper[4728]: if [[ -f "/env/_master" ]]; then Feb 27 10:27:53 crc kubenswrapper[4728]: set -o allexport Feb 27 10:27:53 crc kubenswrapper[4728]: source "/env/_master" Feb 27 10:27:53 crc kubenswrapper[4728]: set +o allexport Feb 27 10:27:53 crc 
kubenswrapper[4728]: fi Feb 27 10:27:53 crc kubenswrapper[4728]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 27 10:27:53 crc kubenswrapper[4728]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 27 10:27:53 crc kubenswrapper[4728]: ho_enable="--enable-hybrid-overlay" Feb 27 10:27:53 crc kubenswrapper[4728]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 27 10:27:53 crc kubenswrapper[4728]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 27 10:27:53 crc kubenswrapper[4728]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 27 10:27:53 crc kubenswrapper[4728]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 27 10:27:53 crc kubenswrapper[4728]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 27 10:27:53 crc kubenswrapper[4728]: --webhook-host=127.0.0.1 \ Feb 27 10:27:53 crc kubenswrapper[4728]: --webhook-port=9743 \ Feb 27 10:27:53 crc kubenswrapper[4728]: ${ho_enable} \ Feb 27 10:27:53 crc kubenswrapper[4728]: --enable-interconnect \ Feb 27 10:27:53 crc kubenswrapper[4728]: --disable-approver \ Feb 27 10:27:53 crc kubenswrapper[4728]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 27 10:27:53 crc kubenswrapper[4728]: --wait-for-kubernetes-api=200s \ Feb 27 10:27:53 crc kubenswrapper[4728]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 27 10:27:53 crc kubenswrapper[4728]: --loglevel="${LOGLEVEL}" Feb 27 10:27:53 crc kubenswrapper[4728]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:27:53 crc 
kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.103787 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.105099 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:27:53 crc kubenswrapper[4728]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 10:27:53 crc kubenswrapper[4728]: if [[ -f "/env/_master" ]]; then Feb 27 10:27:53 crc kubenswrapper[4728]: set -o allexport Feb 27 10:27:53 crc kubenswrapper[4728]: source "/env/_master" Feb 27 10:27:53 crc kubenswrapper[4728]: set +o allexport Feb 27 10:27:53 crc kubenswrapper[4728]: fi Feb 27 10:27:53 crc kubenswrapper[4728]: Feb 27 10:27:53 crc kubenswrapper[4728]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 27 10:27:53 crc kubenswrapper[4728]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 27 10:27:53 crc kubenswrapper[4728]: --disable-webhook \ Feb 27 10:27:53 crc kubenswrapper[4728]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 27 10:27:53 crc kubenswrapper[4728]: --loglevel="${LOGLEVEL}" Feb 27 10:27:53 crc kubenswrapper[4728]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:27:53 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.105280 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a76c7a1e33a1a26858084b7ad704c06f5b7e681dfb71e6df0c660e2ef8a30fa2"} Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.106409 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with 
CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.107743 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:27:53 crc kubenswrapper[4728]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 27 10:27:53 crc kubenswrapper[4728]: set -o allexport Feb 27 10:27:53 crc kubenswrapper[4728]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 27 10:27:53 crc kubenswrapper[4728]: source /etc/kubernetes/apiserver-url.env Feb 27 10:27:53 crc kubenswrapper[4728]: else Feb 27 10:27:53 crc kubenswrapper[4728]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 27 10:27:53 crc kubenswrapper[4728]: exit 1 Feb 27 10:27:53 crc kubenswrapper[4728]: fi Feb 27 10:27:53 crc kubenswrapper[4728]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 27 10:27:53 crc kubenswrapper[4728]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:27:53 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.108850 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.110833 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 
10:27:53.110892 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.110921 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.110953 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.110995 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:53Z","lastTransitionTime":"2026-02-27T10:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.119159 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.133758 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.149060 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.160416 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.173910 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.183396 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.197786 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.210926 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.213267 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.213304 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.213316 4728 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.213337 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.213351 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:53Z","lastTransitionTime":"2026-02-27T10:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.226698 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.237319 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.249808 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.260438 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.316046 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.316351 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 
10:27:53.316601 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.316776 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.316919 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:53Z","lastTransitionTime":"2026-02-27T10:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.318448 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.318607 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:27:54.318578413 +0000 UTC m=+94.280944579 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.419200 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.419261 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.419300 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.419329 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.419478 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.419517 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.419532 4728 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.419586 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 10:27:54.419568928 +0000 UTC m=+94.381935034 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.419587 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.419626 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.419642 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.419659 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.419677 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.419700 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.419717 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:53Z","lastTransitionTime":"2026-02-27T10:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.419659 4728 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.419677 4728 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.419914 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:27:54.419885127 +0000 UTC m=+94.382251263 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.419582 4728 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.420024 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-27 10:27:54.41999567 +0000 UTC m=+94.382361816 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:27:53 crc kubenswrapper[4728]: E0227 10:27:53.420083 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:27:54.420071581 +0000 UTC m=+94.382437817 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.522347 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.522396 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.522408 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.522426 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 
10:27:53.522438 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:53Z","lastTransitionTime":"2026-02-27T10:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.625315 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.625394 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.625411 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.625437 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.625455 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:53Z","lastTransitionTime":"2026-02-27T10:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.728877 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.728930 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.728946 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.728969 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.728990 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:53Z","lastTransitionTime":"2026-02-27T10:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.832580 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.832735 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.832761 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.832786 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.832803 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:53Z","lastTransitionTime":"2026-02-27T10:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.935294 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.935378 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.935397 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.935431 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:53 crc kubenswrapper[4728]: I0227 10:27:53.935453 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:53Z","lastTransitionTime":"2026-02-27T10:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.042562 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.042611 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.042628 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.042652 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.042670 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:54Z","lastTransitionTime":"2026-02-27T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.144559 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.144618 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.144637 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.144661 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.144679 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:54Z","lastTransitionTime":"2026-02-27T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.247330 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.247401 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.247425 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.247456 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.247476 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:54Z","lastTransitionTime":"2026-02-27T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.327351 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:27:54 crc kubenswrapper[4728]: E0227 10:27:54.327575 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 10:27:56.327482477 +0000 UTC m=+96.289848623 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.350201 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.350244 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.350253 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.350267 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.350275 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:54Z","lastTransitionTime":"2026-02-27T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.428490 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.428540 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.428562 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.428581 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:27:54 crc kubenswrapper[4728]: E0227 10:27:54.428692 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Feb 27 10:27:54 crc kubenswrapper[4728]: E0227 10:27:54.428706 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:27:54 crc kubenswrapper[4728]: E0227 10:27:54.428717 4728 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:27:54 crc kubenswrapper[4728]: E0227 10:27:54.428730 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:27:54 crc kubenswrapper[4728]: E0227 10:27:54.428767 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 10:27:56.42875339 +0000 UTC m=+96.391119496 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:27:54 crc kubenswrapper[4728]: E0227 10:27:54.428797 4728 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:27:54 crc kubenswrapper[4728]: E0227 10:27:54.428774 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:27:54 crc kubenswrapper[4728]: E0227 10:27:54.428862 4728 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:27:54 crc kubenswrapper[4728]: E0227 10:27:54.428867 4728 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:27:54 crc kubenswrapper[4728]: E0227 10:27:54.428917 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:27:56.428886903 +0000 UTC m=+96.391253079 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:27:54 crc kubenswrapper[4728]: E0227 10:27:54.429074 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 10:27:56.429032697 +0000 UTC m=+96.391398823 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:27:54 crc kubenswrapper[4728]: E0227 10:27:54.429112 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:27:56.429100699 +0000 UTC m=+96.391466815 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.452472 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.452575 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.452595 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.452621 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.452638 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:54Z","lastTransitionTime":"2026-02-27T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.555260 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.555325 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.555343 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.555368 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.555386 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:54Z","lastTransitionTime":"2026-02-27T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.658380 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.658453 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.658470 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.658497 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.658559 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:54Z","lastTransitionTime":"2026-02-27T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.724737 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.724754 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:27:54 crc kubenswrapper[4728]: E0227 10:27:54.724953 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.725024 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:27:54 crc kubenswrapper[4728]: E0227 10:27:54.725191 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:27:54 crc kubenswrapper[4728]: E0227 10:27:54.725380 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.731269 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.732783 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.734605 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.735464 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.737025 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.737967 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.738885 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.740270 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.741122 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.742355 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.743243 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.745344 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.746771 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.748052 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.750536 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.751197 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.752700 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.753188 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.753901 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.755005 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.755531 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.756451 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.756881 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.757852 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.758246 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.758817 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.759908 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.760389 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.761320 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.761366 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.761398 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.761414 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.761438 4728 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.761456 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:54Z","lastTransitionTime":"2026-02-27T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.761830 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.762694 4728 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.762801 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.764372 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.765244 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.765694 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.767153 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.767859 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.768754 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.769333 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.770388 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.770867 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.771873 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.772527 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.773435 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.773881 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.774716 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.775214 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.776227 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.776730 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.777596 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.778033 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.778885 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.779406 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.779957 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.863626 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.863691 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.863710 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.863734 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.863753 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:54Z","lastTransitionTime":"2026-02-27T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.966814 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.966883 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.966905 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.966956 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:54 crc kubenswrapper[4728]: I0227 10:27:54.966982 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:54Z","lastTransitionTime":"2026-02-27T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.070298 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.070387 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.070410 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.070446 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.070469 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:55Z","lastTransitionTime":"2026-02-27T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.172779 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.172826 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.172840 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.172856 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.172868 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:55Z","lastTransitionTime":"2026-02-27T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.275560 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.275636 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.275654 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.275679 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.275697 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:55Z","lastTransitionTime":"2026-02-27T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.378414 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.378459 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.378472 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.378490 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.378529 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:55Z","lastTransitionTime":"2026-02-27T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.481880 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.481994 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.482071 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.482111 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.482195 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:55Z","lastTransitionTime":"2026-02-27T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.585231 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.585322 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.585345 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.585377 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.585406 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:55Z","lastTransitionTime":"2026-02-27T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.688018 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.688095 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.688121 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.688151 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.688173 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:55Z","lastTransitionTime":"2026-02-27T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.790706 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.790766 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.790784 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.790807 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.790824 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:55Z","lastTransitionTime":"2026-02-27T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.893003 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.893096 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.893120 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.893152 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.893174 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:55Z","lastTransitionTime":"2026-02-27T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.996563 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.996651 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.996672 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.996731 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:55 crc kubenswrapper[4728]: I0227 10:27:55.996752 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:55Z","lastTransitionTime":"2026-02-27T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.100224 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.100298 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.100322 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.100347 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.100367 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:56Z","lastTransitionTime":"2026-02-27T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.203341 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.203448 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.203470 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.203531 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.203549 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:56Z","lastTransitionTime":"2026-02-27T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.307145 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.307203 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.307220 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.307243 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.307260 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:56Z","lastTransitionTime":"2026-02-27T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.347668 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:27:56 crc kubenswrapper[4728]: E0227 10:27:56.347848 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 10:28:00.347816753 +0000 UTC m=+100.310182859 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.409949 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.410026 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.410045 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.410073 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.410103 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:56Z","lastTransitionTime":"2026-02-27T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.449059 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.449123 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.449165 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.449202 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:27:56 crc kubenswrapper[4728]: E0227 10:27:56.449292 4728 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:27:56 crc kubenswrapper[4728]: E0227 10:27:56.449360 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:27:56 crc kubenswrapper[4728]: E0227 10:27:56.449397 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:27:56 crc kubenswrapper[4728]: E0227 10:27:56.449406 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:27:56 crc kubenswrapper[4728]: E0227 10:27:56.449450 4728 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:27:56 crc kubenswrapper[4728]: E0227 10:27:56.449418 4728 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:27:56 crc kubenswrapper[4728]: E0227 10:27:56.449486 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:27:56 crc kubenswrapper[4728]: E0227 10:27:56.449716 4728 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:27:56 crc kubenswrapper[4728]: E0227 10:27:56.449365 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:28:00.449343052 +0000 UTC m=+100.411709148 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:27:56 crc kubenswrapper[4728]: E0227 10:27:56.449749 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 10:28:00.449741584 +0000 UTC m=+100.412107690 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:27:56 crc kubenswrapper[4728]: E0227 10:27:56.449761 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-27 10:28:00.449755904 +0000 UTC m=+100.412122010 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:27:56 crc kubenswrapper[4728]: E0227 10:27:56.449770 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 10:28:00.449766324 +0000 UTC m=+100.412132430 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.513957 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.514015 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.514031 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.514055 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.514072 4728 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:56Z","lastTransitionTime":"2026-02-27T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.617305 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.617359 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.617380 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.617409 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.617430 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:56Z","lastTransitionTime":"2026-02-27T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.719894 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.719945 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.719957 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.719975 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.719985 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:56Z","lastTransitionTime":"2026-02-27T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.724582 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.724619 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:27:56 crc kubenswrapper[4728]: E0227 10:27:56.724765 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.724802 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:27:56 crc kubenswrapper[4728]: E0227 10:27:56.724956 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:27:56 crc kubenswrapper[4728]: E0227 10:27:56.725109 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.822532 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.822582 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.822621 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.822641 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.822662 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:56Z","lastTransitionTime":"2026-02-27T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.925479 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.925617 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.925642 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.925672 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:56 crc kubenswrapper[4728]: I0227 10:27:56.925694 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:56Z","lastTransitionTime":"2026-02-27T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.028818 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.028904 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.028922 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.028999 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.029018 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:57Z","lastTransitionTime":"2026-02-27T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.131306 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.131342 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.131350 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.131369 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.131380 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:57Z","lastTransitionTime":"2026-02-27T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.233932 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.234052 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.234121 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.234146 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.234162 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:57Z","lastTransitionTime":"2026-02-27T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.337417 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.337483 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.337532 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.337563 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.337583 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:57Z","lastTransitionTime":"2026-02-27T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.442230 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.442317 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.442338 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.442364 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.442391 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:57Z","lastTransitionTime":"2026-02-27T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.545584 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.545645 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.545656 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.545669 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.545679 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:57Z","lastTransitionTime":"2026-02-27T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.647753 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.647811 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.647829 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.647852 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.647870 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:57Z","lastTransitionTime":"2026-02-27T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.739569 4728 scope.go:117] "RemoveContainer" containerID="0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48" Feb 27 10:27:57 crc kubenswrapper[4728]: E0227 10:27:57.740062 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.745077 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.750969 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.750998 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.751009 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.751021 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.751032 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:57Z","lastTransitionTime":"2026-02-27T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.853921 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.853967 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.854022 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.854045 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.854096 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:57Z","lastTransitionTime":"2026-02-27T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.956122 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.956161 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.956173 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.956188 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:57 crc kubenswrapper[4728]: I0227 10:27:57.956201 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:57Z","lastTransitionTime":"2026-02-27T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.059047 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.059109 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.059120 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.059134 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.059144 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:58Z","lastTransitionTime":"2026-02-27T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.118759 4728 scope.go:117] "RemoveContainer" containerID="0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48" Feb 27 10:27:58 crc kubenswrapper[4728]: E0227 10:27:58.119054 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.162402 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.162470 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.162494 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.162548 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.162566 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:58Z","lastTransitionTime":"2026-02-27T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.265765 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.265840 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.265863 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.265895 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.265917 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:58Z","lastTransitionTime":"2026-02-27T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.368834 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.368920 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.368933 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.368950 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.369281 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:58Z","lastTransitionTime":"2026-02-27T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.472725 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.472806 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.472827 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.472857 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.472874 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:58Z","lastTransitionTime":"2026-02-27T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.575260 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.575316 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.575333 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.575400 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.575418 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:58Z","lastTransitionTime":"2026-02-27T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.672387 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.672460 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.672479 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.672528 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.672546 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:58Z","lastTransitionTime":"2026-02-27T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:58 crc kubenswrapper[4728]: E0227 10:27:58.688130 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.692858 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.692948 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.692966 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.692989 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.693005 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:58Z","lastTransitionTime":"2026-02-27T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:58 crc kubenswrapper[4728]: E0227 10:27:58.706559 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:27:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.717569 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.717656 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.717676 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.720177 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.720253 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:58Z","lastTransitionTime":"2026-02-27T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.723853 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.723957 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:27:58 crc kubenswrapper[4728]: E0227 10:27:58.724012 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:27:58 crc kubenswrapper[4728]: E0227 10:27:58.724203 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.724393 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:27:58 crc kubenswrapper[4728]: E0227 10:27:58.724557 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.743607 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.743666 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.743687 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.743723 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.743741 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:58Z","lastTransitionTime":"2026-02-27T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.764425 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.764480 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.764490 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.764526 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.764537 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:58Z","lastTransitionTime":"2026-02-27T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:27:58 crc kubenswrapper[4728]: E0227 10:27:58.777145 4728 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.778884 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.778907 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.778915 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.778927 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.778936 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:58Z","lastTransitionTime":"2026-02-27T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.881914 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.881974 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.881992 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.882016 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.882033 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:58Z","lastTransitionTime":"2026-02-27T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.986112 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.986193 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.986236 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.986271 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:58 crc kubenswrapper[4728]: I0227 10:27:58.986296 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:58Z","lastTransitionTime":"2026-02-27T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.088894 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.088969 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.088992 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.089018 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.089035 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:59Z","lastTransitionTime":"2026-02-27T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.192175 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.192238 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.192258 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.192287 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.192310 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:59Z","lastTransitionTime":"2026-02-27T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.294791 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.294850 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.294867 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.294895 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.294912 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:59Z","lastTransitionTime":"2026-02-27T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.397265 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.397318 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.397336 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.397358 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.397375 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:59Z","lastTransitionTime":"2026-02-27T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.499978 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.500034 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.500053 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.500079 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.500096 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:59Z","lastTransitionTime":"2026-02-27T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.602541 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.602594 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.602616 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.602648 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.602670 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:59Z","lastTransitionTime":"2026-02-27T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.705303 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.705369 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.705396 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.705465 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.705486 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:59Z","lastTransitionTime":"2026-02-27T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.809134 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.809215 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.809234 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.809263 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.809282 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:59Z","lastTransitionTime":"2026-02-27T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.912722 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.913045 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.913196 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.913347 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:27:59 crc kubenswrapper[4728]: I0227 10:27:59.913487 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:27:59Z","lastTransitionTime":"2026-02-27T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.016606 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.016640 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.016649 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.016667 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.016680 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:00Z","lastTransitionTime":"2026-02-27T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.120127 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.120188 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.120201 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.120223 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.120237 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:00Z","lastTransitionTime":"2026-02-27T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.223923 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.224344 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.224494 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.224678 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.224796 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:00Z","lastTransitionTime":"2026-02-27T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.327370 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.327424 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.327440 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.327464 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.327481 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:00Z","lastTransitionTime":"2026-02-27T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.388179 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:28:00 crc kubenswrapper[4728]: E0227 10:28:00.388342 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 10:28:08.388319681 +0000 UTC m=+108.350685807 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.429925 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.429959 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.429969 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.429984 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.429995 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:00Z","lastTransitionTime":"2026-02-27T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.489387 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.489452 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.489493 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.489568 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:00 crc kubenswrapper[4728]: E0227 10:28:00.489727 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Feb 27 10:28:00 crc kubenswrapper[4728]: E0227 10:28:00.489771 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:28:00 crc kubenswrapper[4728]: E0227 10:28:00.489795 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:28:00 crc kubenswrapper[4728]: E0227 10:28:00.489803 4728 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:28:00 crc kubenswrapper[4728]: E0227 10:28:00.489815 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:28:00 crc kubenswrapper[4728]: E0227 10:28:00.489863 4728 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:28:00 crc kubenswrapper[4728]: E0227 10:28:00.489828 4728 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:28:00 crc kubenswrapper[4728]: E0227 10:28:00.489961 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-27 10:28:08.489882981 +0000 UTC m=+108.452249127 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:28:00 crc kubenswrapper[4728]: E0227 10:28:00.489999 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 10:28:08.489982643 +0000 UTC m=+108.452348779 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:28:00 crc kubenswrapper[4728]: E0227 10:28:00.489734 4728 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:28:00 crc kubenswrapper[4728]: E0227 10:28:00.490106 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 10:28:08.490054185 +0000 UTC m=+108.452420321 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:28:00 crc kubenswrapper[4728]: E0227 10:28:00.490177 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:28:08.490137057 +0000 UTC m=+108.452503203 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.532956 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.533202 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.533317 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.533452 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.533655 4728 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:00Z","lastTransitionTime":"2026-02-27T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.636043 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.636126 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.636143 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.636166 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.636182 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:00Z","lastTransitionTime":"2026-02-27T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.724199 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.724275 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:00 crc kubenswrapper[4728]: E0227 10:28:00.724376 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.724394 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:00 crc kubenswrapper[4728]: E0227 10:28:00.724499 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:00 crc kubenswrapper[4728]: E0227 10:28:00.724717 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.738646 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.738914 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.739024 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.739139 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.739260 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:00Z","lastTransitionTime":"2026-02-27T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.742532 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\",\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 
10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.759699 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.775318 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.787663 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.801976 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.816964 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.831158 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.841778 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.841839 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 
10:28:00.841857 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.841881 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.841897 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:00Z","lastTransitionTime":"2026-02-27T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.944242 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.944303 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.944319 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.944343 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:00 crc kubenswrapper[4728]: I0227 10:28:00.944360 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:00Z","lastTransitionTime":"2026-02-27T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.048067 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.048241 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.048280 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.048314 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.048348 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:01Z","lastTransitionTime":"2026-02-27T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.151130 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.151212 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.151237 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.151271 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.151294 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:01Z","lastTransitionTime":"2026-02-27T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.254623 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.254688 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.254704 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.254729 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.254746 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:01Z","lastTransitionTime":"2026-02-27T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.357335 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.357393 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.357417 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.357447 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.357469 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:01Z","lastTransitionTime":"2026-02-27T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.460998 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.461606 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.461710 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.461881 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.461901 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:01Z","lastTransitionTime":"2026-02-27T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.564752 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.564829 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.564846 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.564872 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.564889 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:01Z","lastTransitionTime":"2026-02-27T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.667897 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.667954 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.667973 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.667998 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.668015 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:01Z","lastTransitionTime":"2026-02-27T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.771414 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.771474 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.771491 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.771542 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.771559 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:01Z","lastTransitionTime":"2026-02-27T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.875162 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.875211 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.875221 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.875238 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.875249 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:01Z","lastTransitionTime":"2026-02-27T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.977443 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.977552 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.977582 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.977612 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:01 crc kubenswrapper[4728]: I0227 10:28:01.977633 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:01Z","lastTransitionTime":"2026-02-27T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.080365 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.080413 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.080430 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.080453 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.080471 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:02Z","lastTransitionTime":"2026-02-27T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.183168 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.183239 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.183261 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.183288 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.183305 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:02Z","lastTransitionTime":"2026-02-27T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.285962 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.286026 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.286042 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.286065 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.286083 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:02Z","lastTransitionTime":"2026-02-27T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.389027 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.389064 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.389077 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.389093 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.389104 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:02Z","lastTransitionTime":"2026-02-27T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.492558 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.492875 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.493055 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.493196 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.493334 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:02Z","lastTransitionTime":"2026-02-27T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.596556 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.596617 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.596639 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.596664 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.596681 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:02Z","lastTransitionTime":"2026-02-27T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.699802 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.700204 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.700360 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.700553 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.700734 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:02Z","lastTransitionTime":"2026-02-27T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.724816 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.724884 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.724951 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:02 crc kubenswrapper[4728]: E0227 10:28:02.725088 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:02 crc kubenswrapper[4728]: E0227 10:28:02.725458 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:02 crc kubenswrapper[4728]: E0227 10:28:02.725590 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.804221 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.804264 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.804275 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.804319 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.804331 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:02Z","lastTransitionTime":"2026-02-27T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.907686 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.908499 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.908687 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.908834 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:02 crc kubenswrapper[4728]: I0227 10:28:02.908970 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:02Z","lastTransitionTime":"2026-02-27T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.011496 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.011609 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.011652 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.011680 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.011700 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:03Z","lastTransitionTime":"2026-02-27T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.113880 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.114110 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.114228 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.114321 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.114399 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:03Z","lastTransitionTime":"2026-02-27T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.217124 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.217398 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.217462 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.217542 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.217617 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:03Z","lastTransitionTime":"2026-02-27T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.321243 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.321279 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.321289 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.321304 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.321313 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:03Z","lastTransitionTime":"2026-02-27T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.423711 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.423749 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.423759 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.423775 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.423786 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:03Z","lastTransitionTime":"2026-02-27T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.526085 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.526114 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.526122 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.526133 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.526141 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:03Z","lastTransitionTime":"2026-02-27T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.629128 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.629172 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.629189 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.629212 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.629228 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:03Z","lastTransitionTime":"2026-02-27T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.732288 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.732337 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.732357 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.732380 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.732396 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:03Z","lastTransitionTime":"2026-02-27T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.835588 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.835646 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.835671 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.835696 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.835713 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:03Z","lastTransitionTime":"2026-02-27T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.938930 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.939585 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.939616 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.939642 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:03 crc kubenswrapper[4728]: I0227 10:28:03.939664 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:03Z","lastTransitionTime":"2026-02-27T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.042529 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.042592 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.042610 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.042634 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.042652 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:04Z","lastTransitionTime":"2026-02-27T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.145906 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.145965 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.146008 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.146054 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.146077 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:04Z","lastTransitionTime":"2026-02-27T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.249856 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.250154 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.250245 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.250343 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.250431 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:04Z","lastTransitionTime":"2026-02-27T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.353168 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.353296 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.353315 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.353340 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.353357 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:04Z","lastTransitionTime":"2026-02-27T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.455751 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.455831 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.455855 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.455878 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.455895 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:04Z","lastTransitionTime":"2026-02-27T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.559102 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.559156 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.559174 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.559196 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.559212 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:04Z","lastTransitionTime":"2026-02-27T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.662240 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.662352 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.662373 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.662400 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.662418 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:04Z","lastTransitionTime":"2026-02-27T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.724935 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.725000 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:04 crc kubenswrapper[4728]: E0227 10:28:04.725132 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.725181 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:04 crc kubenswrapper[4728]: E0227 10:28:04.725319 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:04 crc kubenswrapper[4728]: E0227 10:28:04.725449 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.765859 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.765919 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.765969 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.766007 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.766029 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:04Z","lastTransitionTime":"2026-02-27T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.869069 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.869149 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.869173 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.869204 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.869224 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:04Z","lastTransitionTime":"2026-02-27T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.971470 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.971939 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.972000 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.972063 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:04 crc kubenswrapper[4728]: I0227 10:28:04.972125 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:04Z","lastTransitionTime":"2026-02-27T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.073928 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.073966 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.073974 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.073988 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.073996 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:05Z","lastTransitionTime":"2026-02-27T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.176552 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.176623 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.176648 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.176679 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.176702 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:05Z","lastTransitionTime":"2026-02-27T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.186139 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-n4c77"] Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.186676 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-n4c77" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.192879 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.192902 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.193363 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.207999 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.221182 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.238197 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\",\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86
0355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.254771 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.269050 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.280087 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.280270 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.280368 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.280461 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.280568 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:05Z","lastTransitionTime":"2026-02-27T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.281695 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.297298 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.306738 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.335097 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ef8ed63c-6947-4b06-8742-54b7ba279aa7-hosts-file\") pod \"node-resolver-n4c77\" (UID: \"ef8ed63c-6947-4b06-8742-54b7ba279aa7\") " pod="openshift-dns/node-resolver-n4c77" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.335171 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phsn4\" (UniqueName: \"kubernetes.io/projected/ef8ed63c-6947-4b06-8742-54b7ba279aa7-kube-api-access-phsn4\") pod 
\"node-resolver-n4c77\" (UID: \"ef8ed63c-6947-4b06-8742-54b7ba279aa7\") " pod="openshift-dns/node-resolver-n4c77" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.383428 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.383489 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.383536 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.383565 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.383584 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:05Z","lastTransitionTime":"2026-02-27T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.435872 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ef8ed63c-6947-4b06-8742-54b7ba279aa7-hosts-file\") pod \"node-resolver-n4c77\" (UID: \"ef8ed63c-6947-4b06-8742-54b7ba279aa7\") " pod="openshift-dns/node-resolver-n4c77" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.435936 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phsn4\" (UniqueName: \"kubernetes.io/projected/ef8ed63c-6947-4b06-8742-54b7ba279aa7-kube-api-access-phsn4\") pod \"node-resolver-n4c77\" (UID: \"ef8ed63c-6947-4b06-8742-54b7ba279aa7\") " pod="openshift-dns/node-resolver-n4c77" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.436027 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ef8ed63c-6947-4b06-8742-54b7ba279aa7-hosts-file\") pod \"node-resolver-n4c77\" (UID: \"ef8ed63c-6947-4b06-8742-54b7ba279aa7\") " pod="openshift-dns/node-resolver-n4c77" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.464633 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phsn4\" (UniqueName: \"kubernetes.io/projected/ef8ed63c-6947-4b06-8742-54b7ba279aa7-kube-api-access-phsn4\") pod \"node-resolver-n4c77\" (UID: \"ef8ed63c-6947-4b06-8742-54b7ba279aa7\") " pod="openshift-dns/node-resolver-n4c77" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.486707 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.486761 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.486780 4728 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.486804 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.486821 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:05Z","lastTransitionTime":"2026-02-27T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.510132 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-n4c77" Feb 27 10:28:05 crc kubenswrapper[4728]: W0227 10:28:05.525646 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef8ed63c_6947_4b06_8742_54b7ba279aa7.slice/crio-7672520b912a86a7920adb0569cc31cb6e44428ece1811e5e863ee2491f5ced6 WatchSource:0}: Error finding container 7672520b912a86a7920adb0569cc31cb6e44428ece1811e5e863ee2491f5ced6: Status 404 returned error can't find the container with id 7672520b912a86a7920adb0569cc31cb6e44428ece1811e5e863ee2491f5ced6 Feb 27 10:28:05 crc kubenswrapper[4728]: E0227 10:28:05.528466 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:05 crc kubenswrapper[4728]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 27 10:28:05 crc kubenswrapper[4728]: set -uo pipefail Feb 27 10:28:05 crc kubenswrapper[4728]: Feb 27 10:28:05 crc kubenswrapper[4728]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 
27 10:28:05 crc kubenswrapper[4728]: Feb 27 10:28:05 crc kubenswrapper[4728]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 27 10:28:05 crc kubenswrapper[4728]: HOSTS_FILE="/etc/hosts" Feb 27 10:28:05 crc kubenswrapper[4728]: TEMP_FILE="/etc/hosts.tmp" Feb 27 10:28:05 crc kubenswrapper[4728]: Feb 27 10:28:05 crc kubenswrapper[4728]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 27 10:28:05 crc kubenswrapper[4728]: Feb 27 10:28:05 crc kubenswrapper[4728]: # Make a temporary file with the old hosts file's attributes. Feb 27 10:28:05 crc kubenswrapper[4728]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 27 10:28:05 crc kubenswrapper[4728]: echo "Failed to preserve hosts file. Exiting." Feb 27 10:28:05 crc kubenswrapper[4728]: exit 1 Feb 27 10:28:05 crc kubenswrapper[4728]: fi Feb 27 10:28:05 crc kubenswrapper[4728]: Feb 27 10:28:05 crc kubenswrapper[4728]: while true; do Feb 27 10:28:05 crc kubenswrapper[4728]: declare -A svc_ips Feb 27 10:28:05 crc kubenswrapper[4728]: for svc in "${services[@]}"; do Feb 27 10:28:05 crc kubenswrapper[4728]: # Fetch service IP from cluster dns if present. We make several tries Feb 27 10:28:05 crc kubenswrapper[4728]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 27 10:28:05 crc kubenswrapper[4728]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 27 10:28:05 crc kubenswrapper[4728]: # support UDP loadbalancers and require reaching DNS through TCP. 
Feb 27 10:28:05 crc kubenswrapper[4728]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 27 10:28:05 crc kubenswrapper[4728]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 27 10:28:05 crc kubenswrapper[4728]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 27 10:28:05 crc kubenswrapper[4728]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 27 10:28:05 crc kubenswrapper[4728]: for i in ${!cmds[*]} Feb 27 10:28:05 crc kubenswrapper[4728]: do Feb 27 10:28:05 crc kubenswrapper[4728]: ips=($(eval "${cmds[i]}")) Feb 27 10:28:05 crc kubenswrapper[4728]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 27 10:28:05 crc kubenswrapper[4728]: svc_ips["${svc}"]="${ips[@]}" Feb 27 10:28:05 crc kubenswrapper[4728]: break Feb 27 10:28:05 crc kubenswrapper[4728]: fi Feb 27 10:28:05 crc kubenswrapper[4728]: done Feb 27 10:28:05 crc kubenswrapper[4728]: done Feb 27 10:28:05 crc kubenswrapper[4728]: Feb 27 10:28:05 crc kubenswrapper[4728]: # Update /etc/hosts only if we get valid service IPs Feb 27 10:28:05 crc kubenswrapper[4728]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 27 10:28:05 crc kubenswrapper[4728]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 27 10:28:05 crc kubenswrapper[4728]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 27 10:28:05 crc kubenswrapper[4728]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 27 10:28:05 crc kubenswrapper[4728]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 27 10:28:05 crc kubenswrapper[4728]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 27 10:28:05 crc kubenswrapper[4728]: sleep 60 & wait Feb 27 10:28:05 crc kubenswrapper[4728]: continue Feb 27 10:28:05 crc kubenswrapper[4728]: fi Feb 27 10:28:05 crc kubenswrapper[4728]: Feb 27 10:28:05 crc kubenswrapper[4728]: # Append resolver entries for services Feb 27 10:28:05 crc kubenswrapper[4728]: rc=0 Feb 27 10:28:05 crc kubenswrapper[4728]: for svc in "${!svc_ips[@]}"; do Feb 27 10:28:05 crc kubenswrapper[4728]: for ip in ${svc_ips[${svc}]}; do Feb 27 10:28:05 crc kubenswrapper[4728]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 27 10:28:05 crc kubenswrapper[4728]: done Feb 27 10:28:05 crc kubenswrapper[4728]: done Feb 27 10:28:05 crc kubenswrapper[4728]: if [[ $rc -ne 0 ]]; then Feb 27 10:28:05 crc kubenswrapper[4728]: sleep 60 & wait Feb 27 10:28:05 crc kubenswrapper[4728]: continue Feb 27 10:28:05 crc kubenswrapper[4728]: fi Feb 27 10:28:05 crc kubenswrapper[4728]: Feb 27 10:28:05 crc kubenswrapper[4728]: Feb 27 10:28:05 crc kubenswrapper[4728]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 27 10:28:05 crc kubenswrapper[4728]: # Replace /etc/hosts with our modified version if needed Feb 27 10:28:05 crc kubenswrapper[4728]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 27 10:28:05 crc kubenswrapper[4728]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 27 10:28:05 crc kubenswrapper[4728]: fi Feb 27 10:28:05 crc kubenswrapper[4728]: sleep 60 & wait Feb 27 10:28:05 crc kubenswrapper[4728]: unset svc_ips Feb 27 10:28:05 crc kubenswrapper[4728]: done Feb 27 10:28:05 crc kubenswrapper[4728]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phsn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-n4c77_openshift-dns(ef8ed63c-6947-4b06-8742-54b7ba279aa7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:05 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:05 crc kubenswrapper[4728]: E0227 10:28:05.529676 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-n4c77" 
podUID="ef8ed63c-6947-4b06-8742-54b7ba279aa7" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.562148 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-xghgn"] Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.563249 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mf2hh"] Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.563743 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9tlth"] Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.564202 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.564843 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.565424 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.569151 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.569911 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.570275 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.570345 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.570650 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.570866 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.570702 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.571266 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.570708 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.572146 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.580641 4728 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.581058 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.589767 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.590136 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.590588 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.590953 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.591544 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:05Z","lastTransitionTime":"2026-02-27T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.593480 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.615288 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy 
cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name
\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.632230 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.648290 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.662345 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\",\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86
0355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.679367 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.695602 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.695656 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:05 crc 
kubenswrapper[4728]: I0227 10:28:05.695673 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.695697 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.695716 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:05Z","lastTransitionTime":"2026-02-27T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.696965 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.713670 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.725861 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.739695 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/468912b7-185a-4869-9a65-70cbcb3c4fb1-cni-binary-copy\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 
10:28:05.740017 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0cd760d8-c9b2-4e95-97a3-94bc759c9884-os-release\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.740285 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-multus-cni-dir\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.740548 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qtlr\" (UniqueName: \"kubernetes.io/projected/0cd760d8-c9b2-4e95-97a3-94bc759c9884-kube-api-access-9qtlr\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.740786 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-host-run-k8s-cni-cncf-io\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.740982 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-host-var-lib-cni-bin\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" 
Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.741199 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjq97\" (UniqueName: \"kubernetes.io/projected/468912b7-185a-4869-9a65-70cbcb3c4fb1-kube-api-access-sjq97\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.741414 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2cfd349-f825-497b-b698-7fb6bc258b22-mcd-auth-proxy-config\") pod \"machine-config-daemon-mf2hh\" (UID: \"c2cfd349-f825-497b-b698-7fb6bc258b22\") " pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.741650 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-multus-socket-dir-parent\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.741853 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0cd760d8-c9b2-4e95-97a3-94bc759c9884-system-cni-dir\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.741971 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.742335 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-os-release\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.742607 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0cd760d8-c9b2-4e95-97a3-94bc759c9884-cni-binary-copy\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.742853 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-system-cni-dir\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.743078 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-etc-kubernetes\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.743364 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-host-run-netns\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.743630 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-multus-conf-dir\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.743845 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-host-run-multus-certs\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.744048 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c64ws\" (UniqueName: \"kubernetes.io/projected/c2cfd349-f825-497b-b698-7fb6bc258b22-kube-api-access-c64ws\") pod \"machine-config-daemon-mf2hh\" (UID: \"c2cfd349-f825-497b-b698-7fb6bc258b22\") " pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.744259 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cnibin\" (UniqueName: \"kubernetes.io/host-path/0cd760d8-c9b2-4e95-97a3-94bc759c9884-cnibin\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.744537 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0cd760d8-c9b2-4e95-97a3-94bc759c9884-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.744776 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-host-var-lib-cni-multus\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.745030 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-host-var-lib-kubelet\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.745253 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0cd760d8-c9b2-4e95-97a3-94bc759c9884-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.745492 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/468912b7-185a-4869-9a65-70cbcb3c4fb1-multus-daemon-config\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.745766 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2cfd349-f825-497b-b698-7fb6bc258b22-proxy-tls\") pod \"machine-config-daemon-mf2hh\" (UID: \"c2cfd349-f825-497b-b698-7fb6bc258b22\") " pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.745985 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-cnibin\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.746223 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-hostroot\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.746444 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c2cfd349-f825-497b-b698-7fb6bc258b22-rootfs\") pod \"machine-config-daemon-mf2hh\" (UID: \"c2cfd349-f825-497b-b698-7fb6bc258b22\") " pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.759241 4728 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.774230 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.789597 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.798958 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.799005 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.799013 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.799027 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.799036 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:05Z","lastTransitionTime":"2026-02-27T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.802371 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.818426 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.837330 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\",\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 
10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.847496 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0cd760d8-c9b2-4e95-97a3-94bc759c9884-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.847579 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/468912b7-185a-4869-9a65-70cbcb3c4fb1-multus-daemon-config\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.847611 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2cfd349-f825-497b-b698-7fb6bc258b22-proxy-tls\") pod \"machine-config-daemon-mf2hh\" (UID: \"c2cfd349-f825-497b-b698-7fb6bc258b22\") " pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.847641 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-cnibin\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc 
kubenswrapper[4728]: I0227 10:28:05.847689 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-hostroot\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.847722 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c2cfd349-f825-497b-b698-7fb6bc258b22-rootfs\") pod \"machine-config-daemon-mf2hh\" (UID: \"c2cfd349-f825-497b-b698-7fb6bc258b22\") " pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.847755 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/468912b7-185a-4869-9a65-70cbcb3c4fb1-cni-binary-copy\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.847806 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0cd760d8-c9b2-4e95-97a3-94bc759c9884-os-release\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.847836 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-multus-cni-dir\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.847866 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2cfd349-f825-497b-b698-7fb6bc258b22-mcd-auth-proxy-config\") pod \"machine-config-daemon-mf2hh\" (UID: \"c2cfd349-f825-497b-b698-7fb6bc258b22\") " pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.847897 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qtlr\" (UniqueName: \"kubernetes.io/projected/0cd760d8-c9b2-4e95-97a3-94bc759c9884-kube-api-access-9qtlr\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.847929 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-host-run-k8s-cni-cncf-io\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.847944 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-cnibin\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.847959 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-host-var-lib-cni-bin\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848043 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjq97\" (UniqueName: 
\"kubernetes.io/projected/468912b7-185a-4869-9a65-70cbcb3c4fb1-kube-api-access-sjq97\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848087 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0cd760d8-c9b2-4e95-97a3-94bc759c9884-system-cni-dir\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848120 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-os-release\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848152 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-multus-socket-dir-parent\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848143 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0cd760d8-c9b2-4e95-97a3-94bc759c9884-os-release\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848188 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0cd760d8-c9b2-4e95-97a3-94bc759c9884-cni-binary-copy\") 
pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848246 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-system-cni-dir\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848228 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-host-run-k8s-cni-cncf-io\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848310 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-etc-kubernetes\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848278 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-etc-kubernetes\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848373 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-system-cni-dir\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc 
kubenswrapper[4728]: I0227 10:28:05.847863 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-hostroot\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848404 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-host-run-netns\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848430 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0cd760d8-c9b2-4e95-97a3-94bc759c9884-system-cni-dir\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848441 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-host-run-multus-certs\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848476 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c64ws\" (UniqueName: \"kubernetes.io/projected/c2cfd349-f825-497b-b698-7fb6bc258b22-kube-api-access-c64ws\") pod \"machine-config-daemon-mf2hh\" (UID: \"c2cfd349-f825-497b-b698-7fb6bc258b22\") " pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848559 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-multus-conf-dir\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848591 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0cd760d8-c9b2-4e95-97a3-94bc759c9884-cnibin\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848610 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-multus-cni-dir\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848682 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-host-run-netns\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848732 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-host-run-multus-certs\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848026 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c2cfd349-f825-497b-b698-7fb6bc258b22-rootfs\") pod 
\"machine-config-daemon-mf2hh\" (UID: \"c2cfd349-f825-497b-b698-7fb6bc258b22\") " pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.849029 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-multus-conf-dir\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.849054 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/468912b7-185a-4869-9a65-70cbcb3c4fb1-multus-daemon-config\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.849117 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/468912b7-185a-4869-9a65-70cbcb3c4fb1-cni-binary-copy\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.849126 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0cd760d8-c9b2-4e95-97a3-94bc759c9884-cnibin\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.848621 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0cd760d8-c9b2-4e95-97a3-94bc759c9884-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " 
pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.849223 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-host-var-lib-cni-multus\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.849230 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-host-var-lib-cni-bin\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.849261 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-host-var-lib-kubelet\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.849271 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-host-var-lib-cni-multus\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.849127 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-multus-socket-dir-parent\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.849304 
4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-host-var-lib-kubelet\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.849450 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0cd760d8-c9b2-4e95-97a3-94bc759c9884-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.849456 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0cd760d8-c9b2-4e95-97a3-94bc759c9884-cni-binary-copy\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.849633 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0cd760d8-c9b2-4e95-97a3-94bc759c9884-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.849728 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/468912b7-185a-4869-9a65-70cbcb3c4fb1-os-release\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.851168 4728 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2cfd349-f825-497b-b698-7fb6bc258b22-mcd-auth-proxy-config\") pod \"machine-config-daemon-mf2hh\" (UID: \"c2cfd349-f825-497b-b698-7fb6bc258b22\") " pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.853490 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2cfd349-f825-497b-b698-7fb6bc258b22-proxy-tls\") pod \"machine-config-daemon-mf2hh\" (UID: \"c2cfd349-f825-497b-b698-7fb6bc258b22\") " pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.856662 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.874927 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.880284 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjq97\" (UniqueName: \"kubernetes.io/projected/468912b7-185a-4869-9a65-70cbcb3c4fb1-kube-api-access-sjq97\") pod \"multus-9tlth\" (UID: \"468912b7-185a-4869-9a65-70cbcb3c4fb1\") " pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.880783 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qtlr\" (UniqueName: \"kubernetes.io/projected/0cd760d8-c9b2-4e95-97a3-94bc759c9884-kube-api-access-9qtlr\") pod \"multus-additional-cni-plugins-xghgn\" (UID: \"0cd760d8-c9b2-4e95-97a3-94bc759c9884\") " pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.882271 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c64ws\" (UniqueName: \"kubernetes.io/projected/c2cfd349-f825-497b-b698-7fb6bc258b22-kube-api-access-c64ws\") pod \"machine-config-daemon-mf2hh\" (UID: \"c2cfd349-f825-497b-b698-7fb6bc258b22\") " 
pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.889962 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.893618 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-9tlth" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.903653 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.903749 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.903771 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.903869 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.903895 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:05Z","lastTransitionTime":"2026-02-27T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.907450 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: W0227 10:28:05.914009 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod468912b7_185a_4869_9a65_70cbcb3c4fb1.slice/crio-f9c1ae9bd66782b584b3435d8ed48fb1c5c78733973decdeb4017c1d54c29752 WatchSource:0}: Error finding container f9c1ae9bd66782b584b3435d8ed48fb1c5c78733973decdeb4017c1d54c29752: Status 404 returned error can't find the container with id f9c1ae9bd66782b584b3435d8ed48fb1c5c78733973decdeb4017c1d54c29752 Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.914129 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xghgn" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.921121 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:28:05 crc kubenswrapper[4728]: E0227 10:28:05.921533 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:05 crc kubenswrapper[4728]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 27 10:28:05 crc kubenswrapper[4728]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 27 10:28:05 crc kubenswrapper[4728]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjq97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-9tlth_openshift-multus(468912b7-185a-4869-9a65-70cbcb3c4fb1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:05 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:05 crc kubenswrapper[4728]: E0227 10:28:05.923062 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-9tlth" podUID="468912b7-185a-4869-9a65-70cbcb3c4fb1" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.929611 4728 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: W0227 10:28:05.933057 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cd760d8_c9b2_4e95_97a3_94bc759c9884.slice/crio-c4757d3af623de951ba75a0c4c57fb84094828c3f3d11364f80052c843b16a11 WatchSource:0}: Error finding container c4757d3af623de951ba75a0c4c57fb84094828c3f3d11364f80052c843b16a11: Status 404 returned error can't find the container with id c4757d3af623de951ba75a0c4c57fb84094828c3f3d11364f80052c843b16a11 Feb 27 10:28:05 crc kubenswrapper[4728]: E0227 10:28:05.936379 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qtlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-xghgn_openshift-multus(0cd760d8-c9b2-4e95-97a3-94bc759c9884): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:28:05 crc kubenswrapper[4728]: E0227 10:28:05.937688 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-xghgn" podUID="0cd760d8-c9b2-4e95-97a3-94bc759c9884" Feb 27 10:28:05 crc kubenswrapper[4728]: W0227 10:28:05.940685 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2cfd349_f825_497b_b698_7fb6bc258b22.slice/crio-d01f30692d6a26569cfcff062cc4f025666ace13be948db5b5515e26f63a56d7 WatchSource:0}: Error finding container d01f30692d6a26569cfcff062cc4f025666ace13be948db5b5515e26f63a56d7: Status 404 returned error can't find the container with id d01f30692d6a26569cfcff062cc4f025666ace13be948db5b5515e26f63a56d7 Feb 27 10:28:05 crc kubenswrapper[4728]: E0227 10:28:05.944877 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c64ws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:28:05 crc kubenswrapper[4728]: E0227 10:28:05.950661 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c64ws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:28:05 crc kubenswrapper[4728]: E0227 10:28:05.952008 4728 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.958956 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rpr29"] Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.960557 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.962977 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.963577 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.963741 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.965818 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.966565 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.966717 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 
10:28:05.966757 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.979568 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\",\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 
10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:05 crc kubenswrapper[4728]: I0227 10:28:05.999312 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.007640 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.007691 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.007708 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.007731 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.007748 4728 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:06Z","lastTransitionTime":"2026-02-27T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.016857 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.034543 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.046086 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.062532 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.093231 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.110816 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.110893 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.110917 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.110948 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.110971 4728 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:06Z","lastTransitionTime":"2026-02-27T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.120927 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.139457 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.163945 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-node-log\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.163976 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-run-systemd\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.163991 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-run-openvswitch\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.164007 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-cni-bin\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.164025 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-slash\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.164047 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-etc-openvswitch\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.164061 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-run-ovn-kubernetes\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: 
I0227 10:28:06.164078 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-kubelet\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.164091 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-run-ovn\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.164114 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-env-overrides\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.164128 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-systemd-units\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.164144 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-var-lib-openvswitch\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc 
kubenswrapper[4728]: I0227 10:28:06.164158 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnx4z\" (UniqueName: \"kubernetes.io/projected/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-kube-api-access-dnx4z\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.164172 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-ovnkube-script-lib\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.164192 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-cni-netd\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.164204 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-log-socket\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.164307 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-run-netns\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.164331 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-ovn-node-metrics-cert\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.164349 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.164363 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-ovnkube-config\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.165802 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-n4c77" event={"ID":"ef8ed63c-6947-4b06-8742-54b7ba279aa7","Type":"ContainerStarted","Data":"7672520b912a86a7920adb0569cc31cb6e44428ece1811e5e863ee2491f5ced6"} Feb 27 10:28:06 crc kubenswrapper[4728]: E0227 10:28:06.167205 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:06 crc kubenswrapper[4728]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash 
Feb 27 10:28:06 crc kubenswrapper[4728]: set -uo pipefail Feb 27 10:28:06 crc kubenswrapper[4728]: Feb 27 10:28:06 crc kubenswrapper[4728]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 27 10:28:06 crc kubenswrapper[4728]: Feb 27 10:28:06 crc kubenswrapper[4728]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 27 10:28:06 crc kubenswrapper[4728]: HOSTS_FILE="/etc/hosts" Feb 27 10:28:06 crc kubenswrapper[4728]: TEMP_FILE="/etc/hosts.tmp" Feb 27 10:28:06 crc kubenswrapper[4728]: Feb 27 10:28:06 crc kubenswrapper[4728]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 27 10:28:06 crc kubenswrapper[4728]: Feb 27 10:28:06 crc kubenswrapper[4728]: # Make a temporary file with the old hosts file's attributes. Feb 27 10:28:06 crc kubenswrapper[4728]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 27 10:28:06 crc kubenswrapper[4728]: echo "Failed to preserve hosts file. Exiting." Feb 27 10:28:06 crc kubenswrapper[4728]: exit 1 Feb 27 10:28:06 crc kubenswrapper[4728]: fi Feb 27 10:28:06 crc kubenswrapper[4728]: Feb 27 10:28:06 crc kubenswrapper[4728]: while true; do Feb 27 10:28:06 crc kubenswrapper[4728]: declare -A svc_ips Feb 27 10:28:06 crc kubenswrapper[4728]: for svc in "${services[@]}"; do Feb 27 10:28:06 crc kubenswrapper[4728]: # Fetch service IP from cluster dns if present. We make several tries Feb 27 10:28:06 crc kubenswrapper[4728]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 27 10:28:06 crc kubenswrapper[4728]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 27 10:28:06 crc kubenswrapper[4728]: # support UDP loadbalancers and require reaching DNS through TCP. 
Feb 27 10:28:06 crc kubenswrapper[4728]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 27 10:28:06 crc kubenswrapper[4728]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 27 10:28:06 crc kubenswrapper[4728]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 27 10:28:06 crc kubenswrapper[4728]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 27 10:28:06 crc kubenswrapper[4728]: for i in ${!cmds[*]} Feb 27 10:28:06 crc kubenswrapper[4728]: do Feb 27 10:28:06 crc kubenswrapper[4728]: ips=($(eval "${cmds[i]}")) Feb 27 10:28:06 crc kubenswrapper[4728]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 27 10:28:06 crc kubenswrapper[4728]: svc_ips["${svc}"]="${ips[@]}" Feb 27 10:28:06 crc kubenswrapper[4728]: break Feb 27 10:28:06 crc kubenswrapper[4728]: fi Feb 27 10:28:06 crc kubenswrapper[4728]: done Feb 27 10:28:06 crc kubenswrapper[4728]: done Feb 27 10:28:06 crc kubenswrapper[4728]: Feb 27 10:28:06 crc kubenswrapper[4728]: # Update /etc/hosts only if we get valid service IPs Feb 27 10:28:06 crc kubenswrapper[4728]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 27 10:28:06 crc kubenswrapper[4728]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 27 10:28:06 crc kubenswrapper[4728]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 27 10:28:06 crc kubenswrapper[4728]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 27 10:28:06 crc kubenswrapper[4728]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 27 10:28:06 crc kubenswrapper[4728]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 27 10:28:06 crc kubenswrapper[4728]: sleep 60 & wait Feb 27 10:28:06 crc kubenswrapper[4728]: continue Feb 27 10:28:06 crc kubenswrapper[4728]: fi Feb 27 10:28:06 crc kubenswrapper[4728]: Feb 27 10:28:06 crc kubenswrapper[4728]: # Append resolver entries for services Feb 27 10:28:06 crc kubenswrapper[4728]: rc=0 Feb 27 10:28:06 crc kubenswrapper[4728]: for svc in "${!svc_ips[@]}"; do Feb 27 10:28:06 crc kubenswrapper[4728]: for ip in ${svc_ips[${svc}]}; do Feb 27 10:28:06 crc kubenswrapper[4728]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 27 10:28:06 crc kubenswrapper[4728]: done Feb 27 10:28:06 crc kubenswrapper[4728]: done Feb 27 10:28:06 crc kubenswrapper[4728]: if [[ $rc -ne 0 ]]; then Feb 27 10:28:06 crc kubenswrapper[4728]: sleep 60 & wait Feb 27 10:28:06 crc kubenswrapper[4728]: continue Feb 27 10:28:06 crc kubenswrapper[4728]: fi Feb 27 10:28:06 crc kubenswrapper[4728]: Feb 27 10:28:06 crc kubenswrapper[4728]: Feb 27 10:28:06 crc kubenswrapper[4728]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 27 10:28:06 crc kubenswrapper[4728]: # Replace /etc/hosts with our modified version if needed Feb 27 10:28:06 crc kubenswrapper[4728]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 27 10:28:06 crc kubenswrapper[4728]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 27 10:28:06 crc kubenswrapper[4728]: fi Feb 27 10:28:06 crc kubenswrapper[4728]: sleep 60 & wait Feb 27 10:28:06 crc kubenswrapper[4728]: unset svc_ips Feb 27 10:28:06 crc kubenswrapper[4728]: done Feb 27 10:28:06 crc kubenswrapper[4728]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phsn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-n4c77_openshift-dns(ef8ed63c-6947-4b06-8742-54b7ba279aa7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:06 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.167395 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" event={"ID":"0cd760d8-c9b2-4e95-97a3-94bc759c9884","Type":"ContainerStarted","Data":"c4757d3af623de951ba75a0c4c57fb84094828c3f3d11364f80052c843b16a11"} Feb 27 10:28:06 crc 
kubenswrapper[4728]: E0227 10:28:06.168326 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qtlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-xghgn_openshift-multus(0cd760d8-c9b2-4e95-97a3-94bc759c9884): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:28:06 crc 
kubenswrapper[4728]: E0227 10:28:06.168452 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-n4c77" podUID="ef8ed63c-6947-4b06-8742-54b7ba279aa7" Feb 27 10:28:06 crc kubenswrapper[4728]: E0227 10:28:06.170168 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-xghgn" podUID="0cd760d8-c9b2-4e95-97a3-94bc759c9884" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.170261 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9tlth" event={"ID":"468912b7-185a-4869-9a65-70cbcb3c4fb1","Type":"ContainerStarted","Data":"f9c1ae9bd66782b584b3435d8ed48fb1c5c78733973decdeb4017c1d54c29752"} Feb 27 10:28:06 crc kubenswrapper[4728]: E0227 10:28:06.172711 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:06 crc kubenswrapper[4728]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 27 10:28:06 crc kubenswrapper[4728]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 27 10:28:06 crc kubenswrapper[4728]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjq97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-9tlth_openshift-multus(468912b7-185a-4869-9a65-70cbcb3c4fb1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:06 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.173129 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"d01f30692d6a26569cfcff062cc4f025666ace13be948db5b5515e26f63a56d7"} Feb 27 10:28:06 crc kubenswrapper[4728]: E0227 10:28:06.173765 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-9tlth" podUID="468912b7-185a-4869-9a65-70cbcb3c4fb1" Feb 27 10:28:06 crc kubenswrapper[4728]: E0227 10:28:06.174228 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c64ws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.174597 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: E0227 10:28:06.176201 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c64ws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:28:06 crc kubenswrapper[4728]: E0227 10:28:06.177292 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.189996 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.197944 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.209446 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.212444 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.212470 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.212478 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 
10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.212491 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.212515 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:06Z","lastTransitionTime":"2026-02-27T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.215849 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.226049 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.241748 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\",\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 
10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.257538 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.264835 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-systemd-units\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.264892 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-var-lib-openvswitch\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.264937 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-env-overrides\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.264978 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnx4z\" (UniqueName: \"kubernetes.io/projected/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-kube-api-access-dnx4z\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265025 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-ovnkube-script-lib\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265084 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-cni-netd\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265127 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-run-netns\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265162 4728 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-log-socket\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265210 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-ovn-node-metrics-cert\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265245 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265286 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-ovnkube-config\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265377 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-node-log\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265414 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-run-systemd\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265458 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-slash\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265545 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-run-openvswitch\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265579 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-cni-bin\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265630 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-etc-openvswitch\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265660 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-run-ovn-kubernetes\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265709 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-kubelet\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265741 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-run-ovn\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265840 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-run-ovn\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265901 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-systemd-units\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.265952 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-var-lib-openvswitch\") pod \"ovnkube-node-rpr29\" (UID: 
\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.266857 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-env-overrides\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.266956 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-node-log\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.267008 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-cni-netd\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.267059 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-run-netns\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.267109 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-log-socket\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 
10:28:06.267666 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-ovnkube-script-lib\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.267764 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-run-systemd\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.267813 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-slash\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.267861 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-run-openvswitch\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.267906 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-cni-bin\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.267951 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-etc-openvswitch\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.267997 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-run-ovn-kubernetes\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.268042 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-kubelet\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.270168 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-ovn-node-metrics-cert\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.270250 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.270316 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-ovnkube-config\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.273498 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.286780 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.294403 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnx4z\" (UniqueName: \"kubernetes.io/projected/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-kube-api-access-dnx4z\") pod \"ovnkube-node-rpr29\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 
10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.306871 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.315057 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.315108 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.315124 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.315145 4728 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.315161 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:06Z","lastTransitionTime":"2026-02-27T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.322401 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.335702 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.361392 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.372791 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.417778 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.417835 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.417851 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.417880 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.417897 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:06Z","lastTransitionTime":"2026-02-27T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.521348 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.521403 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.521419 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.521441 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.521457 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:06Z","lastTransitionTime":"2026-02-27T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.586117 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:06 crc kubenswrapper[4728]: W0227 10:28:06.604035 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb021ff26_58a3_4418_b6ba_4aa8e0bb6746.slice/crio-3a8e0b7cc10e78d8d955c50238a1305518903d59220fad8a73f9f277d9a4a66b WatchSource:0}: Error finding container 3a8e0b7cc10e78d8d955c50238a1305518903d59220fad8a73f9f277d9a4a66b: Status 404 returned error can't find the container with id 3a8e0b7cc10e78d8d955c50238a1305518903d59220fad8a73f9f277d9a4a66b Feb 27 10:28:06 crc kubenswrapper[4728]: E0227 10:28:06.609725 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:06 crc kubenswrapper[4728]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 27 10:28:06 crc kubenswrapper[4728]: apiVersion: v1 Feb 27 10:28:06 crc kubenswrapper[4728]: clusters: Feb 27 10:28:06 crc kubenswrapper[4728]: - cluster: Feb 27 10:28:06 crc kubenswrapper[4728]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 27 10:28:06 crc kubenswrapper[4728]: server: https://api-int.crc.testing:6443 Feb 27 10:28:06 crc kubenswrapper[4728]: name: default-cluster Feb 27 10:28:06 crc kubenswrapper[4728]: contexts: Feb 27 10:28:06 crc kubenswrapper[4728]: - context: Feb 27 10:28:06 crc kubenswrapper[4728]: cluster: default-cluster Feb 27 10:28:06 crc kubenswrapper[4728]: namespace: default Feb 27 10:28:06 crc kubenswrapper[4728]: user: default-auth Feb 27 10:28:06 crc kubenswrapper[4728]: name: default-context Feb 27 10:28:06 crc kubenswrapper[4728]: current-context: default-context Feb 27 10:28:06 crc kubenswrapper[4728]: kind: Config Feb 27 10:28:06 crc kubenswrapper[4728]: preferences: {} Feb 27 10:28:06 crc kubenswrapper[4728]: 
users: Feb 27 10:28:06 crc kubenswrapper[4728]: - name: default-auth Feb 27 10:28:06 crc kubenswrapper[4728]: user: Feb 27 10:28:06 crc kubenswrapper[4728]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 27 10:28:06 crc kubenswrapper[4728]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 27 10:28:06 crc kubenswrapper[4728]: EOF Feb 27 10:28:06 crc kubenswrapper[4728]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnx4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-rpr29_openshift-ovn-kubernetes(b021ff26-58a3-4418-b6ba-4aa8e0bb6746): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:06 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:06 crc kubenswrapper[4728]: E0227 10:28:06.613619 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 
10:28:06.624393 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.624461 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.624486 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.624564 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.624592 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:06Z","lastTransitionTime":"2026-02-27T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.724921 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.725124 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:06 crc kubenswrapper[4728]: E0227 10:28:06.725350 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.725654 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:06 crc kubenswrapper[4728]: E0227 10:28:06.725773 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:06 crc kubenswrapper[4728]: E0227 10:28:06.725929 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.727206 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.727260 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.727276 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.727300 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.727316 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:06Z","lastTransitionTime":"2026-02-27T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:06 crc kubenswrapper[4728]: E0227 10:28:06.727875 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:06 crc kubenswrapper[4728]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 27 10:28:06 crc kubenswrapper[4728]: set -o allexport Feb 27 10:28:06 crc kubenswrapper[4728]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 27 10:28:06 crc kubenswrapper[4728]: source /etc/kubernetes/apiserver-url.env Feb 27 10:28:06 crc kubenswrapper[4728]: else Feb 27 10:28:06 crc kubenswrapper[4728]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 27 10:28:06 crc kubenswrapper[4728]: exit 1 Feb 27 10:28:06 crc kubenswrapper[4728]: fi Feb 27 10:28:06 crc kubenswrapper[4728]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 27 10:28:06 crc kubenswrapper[4728]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:06 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:06 crc kubenswrapper[4728]: E0227 10:28:06.729772 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.830425 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 
10:28:06.830487 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.830550 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.830583 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.830607 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:06Z","lastTransitionTime":"2026-02-27T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.933809 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.933871 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.933895 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.933925 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:06 crc kubenswrapper[4728]: I0227 10:28:06.933946 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:06Z","lastTransitionTime":"2026-02-27T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.036975 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.037027 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.037044 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.037066 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.037083 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:07Z","lastTransitionTime":"2026-02-27T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.140169 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.140222 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.140238 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.140261 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.140278 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:07Z","lastTransitionTime":"2026-02-27T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.177630 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerStarted","Data":"3a8e0b7cc10e78d8d955c50238a1305518903d59220fad8a73f9f277d9a4a66b"} Feb 27 10:28:07 crc kubenswrapper[4728]: E0227 10:28:07.179931 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:07 crc kubenswrapper[4728]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 27 10:28:07 crc kubenswrapper[4728]: apiVersion: v1 Feb 27 10:28:07 crc kubenswrapper[4728]: clusters: Feb 27 10:28:07 crc kubenswrapper[4728]: - cluster: Feb 27 10:28:07 crc kubenswrapper[4728]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 27 10:28:07 crc kubenswrapper[4728]: server: https://api-int.crc.testing:6443 Feb 27 10:28:07 crc kubenswrapper[4728]: name: default-cluster Feb 27 10:28:07 crc kubenswrapper[4728]: contexts: Feb 27 10:28:07 crc kubenswrapper[4728]: - context: Feb 27 10:28:07 crc kubenswrapper[4728]: cluster: default-cluster Feb 27 10:28:07 crc kubenswrapper[4728]: namespace: default Feb 27 10:28:07 crc kubenswrapper[4728]: user: default-auth Feb 27 10:28:07 crc kubenswrapper[4728]: name: default-context Feb 27 10:28:07 crc kubenswrapper[4728]: current-context: default-context Feb 27 10:28:07 crc kubenswrapper[4728]: kind: Config Feb 27 10:28:07 crc kubenswrapper[4728]: preferences: {} Feb 27 10:28:07 crc kubenswrapper[4728]: users: Feb 27 10:28:07 crc kubenswrapper[4728]: - name: default-auth Feb 27 10:28:07 crc kubenswrapper[4728]: user: Feb 27 10:28:07 crc kubenswrapper[4728]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 27 10:28:07 crc 
kubenswrapper[4728]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 27 10:28:07 crc kubenswrapper[4728]: EOF Feb 27 10:28:07 crc kubenswrapper[4728]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnx4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-rpr29_openshift-ovn-kubernetes(b021ff26-58a3-4418-b6ba-4aa8e0bb6746): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:07 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:07 crc kubenswrapper[4728]: E0227 10:28:07.181200 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.196281 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.211169 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.227864 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.239389 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.243489 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.243553 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.243601 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.243621 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.243653 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:07Z","lastTransitionTime":"2026-02-27T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.256352 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.275055 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\",\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, 
/tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.292169 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.306777 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.330176 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.346375 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.346432 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.346452 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.346478 4728 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.346498 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:07Z","lastTransitionTime":"2026-02-27T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.357330 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":fals
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.373726 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.387154 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.449829 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.449918 4728 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.449935 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.449958 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.449974 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:07Z","lastTransitionTime":"2026-02-27T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.552869 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.553165 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.553342 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.553492 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.553703 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:07Z","lastTransitionTime":"2026-02-27T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.657335 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.657388 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.657404 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.657431 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.657449 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:07Z","lastTransitionTime":"2026-02-27T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:07 crc kubenswrapper[4728]: E0227 10:28:07.726699 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Re
startPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:28:07 crc kubenswrapper[4728]: E0227 10:28:07.727177 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:07 crc kubenswrapper[4728]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 10:28:07 crc kubenswrapper[4728]: if [[ -f "/env/_master" ]]; then Feb 27 10:28:07 crc kubenswrapper[4728]: set -o allexport Feb 27 10:28:07 crc kubenswrapper[4728]: source "/env/_master" Feb 27 10:28:07 crc kubenswrapper[4728]: set +o allexport Feb 27 10:28:07 crc kubenswrapper[4728]: fi Feb 27 10:28:07 crc kubenswrapper[4728]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 27 10:28:07 crc kubenswrapper[4728]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 27 10:28:07 crc kubenswrapper[4728]: ho_enable="--enable-hybrid-overlay" Feb 27 10:28:07 crc kubenswrapper[4728]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 27 10:28:07 crc kubenswrapper[4728]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 27 10:28:07 crc kubenswrapper[4728]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 27 10:28:07 crc kubenswrapper[4728]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 27 10:28:07 crc kubenswrapper[4728]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 27 10:28:07 crc kubenswrapper[4728]: --webhook-host=127.0.0.1 \ Feb 27 10:28:07 crc kubenswrapper[4728]: --webhook-port=9743 \ Feb 27 10:28:07 crc kubenswrapper[4728]: ${ho_enable} \ Feb 27 10:28:07 crc kubenswrapper[4728]: --enable-interconnect \ Feb 27 10:28:07 crc kubenswrapper[4728]: --disable-approver \ Feb 27 10:28:07 crc kubenswrapper[4728]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 27 10:28:07 crc kubenswrapper[4728]: --wait-for-kubernetes-api=200s \ Feb 27 10:28:07 crc kubenswrapper[4728]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 27 10:28:07 crc kubenswrapper[4728]: --loglevel="${LOGLEVEL}" Feb 27 10:28:07 crc kubenswrapper[4728]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:07 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:07 crc kubenswrapper[4728]: E0227 10:28:07.728428 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" 
podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 27 10:28:07 crc kubenswrapper[4728]: E0227 10:28:07.729537 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:07 crc kubenswrapper[4728]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 10:28:07 crc kubenswrapper[4728]: if [[ -f "/env/_master" ]]; then Feb 27 10:28:07 crc kubenswrapper[4728]: set -o allexport Feb 27 10:28:07 crc kubenswrapper[4728]: source "/env/_master" Feb 27 10:28:07 crc kubenswrapper[4728]: set +o allexport Feb 27 10:28:07 crc kubenswrapper[4728]: fi Feb 27 10:28:07 crc kubenswrapper[4728]: Feb 27 10:28:07 crc kubenswrapper[4728]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 27 10:28:07 crc kubenswrapper[4728]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 27 10:28:07 crc kubenswrapper[4728]: --disable-webhook \ Feb 27 10:28:07 crc kubenswrapper[4728]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 27 10:28:07 crc kubenswrapper[4728]: --loglevel="${LOGLEVEL}" Feb 27 10:28:07 crc kubenswrapper[4728]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:07 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:07 crc kubenswrapper[4728]: E0227 10:28:07.730775 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.761285 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.761355 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.761372 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.761397 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.761414 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:07Z","lastTransitionTime":"2026-02-27T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.864451 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.864558 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.864577 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.864602 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.864619 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:07Z","lastTransitionTime":"2026-02-27T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.968247 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.968360 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.968379 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.968404 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:07 crc kubenswrapper[4728]: I0227 10:28:07.968422 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:07Z","lastTransitionTime":"2026-02-27T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.071203 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.071276 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.071299 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.071334 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.071352 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:08Z","lastTransitionTime":"2026-02-27T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.174197 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.174265 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.174289 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.174319 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.174342 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:08Z","lastTransitionTime":"2026-02-27T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.278021 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.278383 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.278549 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.278699 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.278835 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:08Z","lastTransitionTime":"2026-02-27T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.382061 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.382124 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.382142 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.382169 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.382186 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:08Z","lastTransitionTime":"2026-02-27T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.388678 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:28:08 crc kubenswrapper[4728]: E0227 10:28:08.388831 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 10:28:24.388806486 +0000 UTC m=+124.351172632 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.484849 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.484902 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.484921 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.484945 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.484963 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:08Z","lastTransitionTime":"2026-02-27T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.587791 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.587840 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.587903 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.587926 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.587940 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:08Z","lastTransitionTime":"2026-02-27T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.590576 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.590632 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.590658 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.590688 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:08 crc kubenswrapper[4728]: E0227 10:28:08.590801 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Feb 27 10:28:08 crc kubenswrapper[4728]: E0227 10:28:08.590804 4728 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:28:08 crc kubenswrapper[4728]: E0227 10:28:08.590844 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:28:08 crc kubenswrapper[4728]: E0227 10:28:08.590862 4728 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:28:08 crc kubenswrapper[4728]: E0227 10:28:08.590887 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:28:08 crc kubenswrapper[4728]: E0227 10:28:08.590902 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:28:24.590876608 +0000 UTC m=+124.553242754 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:28:08 crc kubenswrapper[4728]: E0227 10:28:08.590917 4728 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:28:08 crc kubenswrapper[4728]: E0227 10:28:08.590819 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:28:08 crc kubenswrapper[4728]: E0227 10:28:08.590946 4728 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:28:08 crc kubenswrapper[4728]: E0227 10:28:08.590968 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:28:24.59094231 +0000 UTC m=+124.553308446 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:28:08 crc kubenswrapper[4728]: E0227 10:28:08.591000 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 10:28:24.590986501 +0000 UTC m=+124.553352677 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:28:08 crc kubenswrapper[4728]: E0227 10:28:08.591022 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 10:28:24.591011962 +0000 UTC m=+124.553378098 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.691091 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.691185 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.691203 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.691227 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.691243 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:08Z","lastTransitionTime":"2026-02-27T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.724621 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.724683 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:08 crc kubenswrapper[4728]: E0227 10:28:08.724826 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.724629 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:08 crc kubenswrapper[4728]: E0227 10:28:08.725102 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:08 crc kubenswrapper[4728]: E0227 10:28:08.725378 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.745487 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.795417 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.795487 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.795538 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.795566 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.795584 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:08Z","lastTransitionTime":"2026-02-27T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.899174 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.899221 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.899238 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.899259 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:08 crc kubenswrapper[4728]: I0227 10:28:08.899276 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:08Z","lastTransitionTime":"2026-02-27T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.002697 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.002744 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.002763 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.002786 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.002802 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:09Z","lastTransitionTime":"2026-02-27T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.106226 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.106316 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.106335 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.106358 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.106373 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:09Z","lastTransitionTime":"2026-02-27T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.153520 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.153663 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.153688 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.153716 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.153740 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:09Z","lastTransitionTime":"2026-02-27T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:09 crc kubenswrapper[4728]: E0227 10:28:09.170231 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.175878 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.176150 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.176169 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.176648 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.176707 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:09Z","lastTransitionTime":"2026-02-27T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:09 crc kubenswrapper[4728]: E0227 10:28:09.197744 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.203638 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.203709 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.203728 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.204208 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.204251 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:09Z","lastTransitionTime":"2026-02-27T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:09 crc kubenswrapper[4728]: E0227 10:28:09.220441 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.226318 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.226377 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.226401 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.226432 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.226457 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:09Z","lastTransitionTime":"2026-02-27T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:09 crc kubenswrapper[4728]: E0227 10:28:09.242883 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.248293 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.248349 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.248371 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.248402 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.248426 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:09Z","lastTransitionTime":"2026-02-27T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:09 crc kubenswrapper[4728]: E0227 10:28:09.263799 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:09 crc kubenswrapper[4728]: E0227 10:28:09.264066 4728 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.266498 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.266596 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.266619 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.266646 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.266669 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:09Z","lastTransitionTime":"2026-02-27T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.369931 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.369990 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.370006 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.370042 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.370078 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:09Z","lastTransitionTime":"2026-02-27T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.472575 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.472612 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.472643 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.472657 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.472667 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:09Z","lastTransitionTime":"2026-02-27T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.575645 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.575977 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.576062 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.576185 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.576275 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:09Z","lastTransitionTime":"2026-02-27T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.678774 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.678833 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.678852 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.678874 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.678892 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:09Z","lastTransitionTime":"2026-02-27T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.725729 4728 scope.go:117] "RemoveContainer" containerID="0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.782784 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.782834 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.782854 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.782880 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.782900 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:09Z","lastTransitionTime":"2026-02-27T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.886215 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.886290 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.886313 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.886342 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.886360 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:09Z","lastTransitionTime":"2026-02-27T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.988854 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.988918 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.988941 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.988970 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:09 crc kubenswrapper[4728]: I0227 10:28:09.988992 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:09Z","lastTransitionTime":"2026-02-27T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.092160 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.092238 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.092259 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.092281 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.092297 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:10Z","lastTransitionTime":"2026-02-27T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.191722 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.193911 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.193953 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.193966 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.193985 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.193999 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:10Z","lastTransitionTime":"2026-02-27T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.195079 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d"} Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.195366 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.209664 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.220703 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.244189 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.272349 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1
bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.285822 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.299211 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.299943 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.299964 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.299991 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.300009 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:10Z","lastTransitionTime":"2026-02-27T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.306833 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\",\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 
10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.317842 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.331086 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.344746 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.354586 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.366670 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.380009 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.393976 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy 
cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.401921 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.401950 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.401960 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.401977 4728 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.401988 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:10Z","lastTransitionTime":"2026-02-27T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.505199 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.505244 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.505260 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.505282 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.505298 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:10Z","lastTransitionTime":"2026-02-27T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.608705 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.608764 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.608787 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.608817 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.608842 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:10Z","lastTransitionTime":"2026-02-27T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.711395 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.711462 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.711484 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.711576 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.711600 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:10Z","lastTransitionTime":"2026-02-27T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.724222 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.724417 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.724814 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:10 crc kubenswrapper[4728]: E0227 10:28:10.724957 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:10 crc kubenswrapper[4728]: E0227 10:28:10.724993 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:10 crc kubenswrapper[4728]: E0227 10:28:10.725256 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.738290 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.769765 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.784551 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.801678 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.813588 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.813620 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.813632 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.813648 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.813659 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:10Z","lastTransitionTime":"2026-02-27T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.819453 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\",\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 
10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.837133 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.849044 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.867257 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.882264 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.901564 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.916634 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.916693 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.916708 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.916731 4728 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.916749 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:10Z","lastTransitionTime":"2026-02-27T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.917131 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.933167 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:10 crc kubenswrapper[4728]: I0227 10:28:10.958339 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.019374 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.019431 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.019449 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.019473 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.019490 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:11Z","lastTransitionTime":"2026-02-27T10:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.121299 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.121337 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.121347 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.121363 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.121372 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:11Z","lastTransitionTime":"2026-02-27T10:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.223699 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.223736 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.223745 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.223760 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.223770 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:11Z","lastTransitionTime":"2026-02-27T10:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.575389 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.575453 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.575465 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.575481 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.575492 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:11Z","lastTransitionTime":"2026-02-27T10:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.677950 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.678001 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.678016 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.678034 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.678048 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:11Z","lastTransitionTime":"2026-02-27T10:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.780488 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.780553 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.780563 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.780627 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.780644 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:11Z","lastTransitionTime":"2026-02-27T10:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.843916 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-97psz"] Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.844438 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-97psz" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.847298 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.849180 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.849318 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.849777 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.863242 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.877533 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24vxw\" (UniqueName: \"kubernetes.io/projected/6f9f1c81-b0b0-4016-9ef5-38cd92277b5b-kube-api-access-24vxw\") pod \"node-ca-97psz\" (UID: \"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\") " pod="openshift-image-registry/node-ca-97psz" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.877607 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/6f9f1c81-b0b0-4016-9ef5-38cd92277b5b-serviceca\") pod \"node-ca-97psz\" (UID: \"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\") " pod="openshift-image-registry/node-ca-97psz" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.877687 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f9f1c81-b0b0-4016-9ef5-38cd92277b5b-host\") pod \"node-ca-97psz\" (UID: \"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\") " pod="openshift-image-registry/node-ca-97psz" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.878189 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.885027 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.885082 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.885098 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.885122 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.885139 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:11Z","lastTransitionTime":"2026-02-27T10:28:11Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.892411 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.910386 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.923449 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-97psz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24vxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-97psz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.940680 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\",\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.955477 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.966854 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.978838 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/6f9f1c81-b0b0-4016-9ef5-38cd92277b5b-host\") pod \"node-ca-97psz\" (UID: \"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\") " pod="openshift-image-registry/node-ca-97psz" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.978938 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24vxw\" (UniqueName: \"kubernetes.io/projected/6f9f1c81-b0b0-4016-9ef5-38cd92277b5b-kube-api-access-24vxw\") pod \"node-ca-97psz\" (UID: \"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\") " pod="openshift-image-registry/node-ca-97psz" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.978983 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6f9f1c81-b0b0-4016-9ef5-38cd92277b5b-serviceca\") pod \"node-ca-97psz\" (UID: \"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\") " pod="openshift-image-registry/node-ca-97psz" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.978997 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f9f1c81-b0b0-4016-9ef5-38cd92277b5b-host\") pod \"node-ca-97psz\" (UID: \"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\") " pod="openshift-image-registry/node-ca-97psz" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.980752 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6f9f1c81-b0b0-4016-9ef5-38cd92277b5b-serviceca\") pod \"node-ca-97psz\" (UID: \"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\") " pod="openshift-image-registry/node-ca-97psz" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.983473 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.987976 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.988017 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.988029 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.988049 4728 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.988063 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:11Z","lastTransitionTime":"2026-02-27T10:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:11 crc kubenswrapper[4728]: I0227 10:28:11.997655 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24vxw\" (UniqueName: \"kubernetes.io/projected/6f9f1c81-b0b0-4016-9ef5-38cd92277b5b-kube-api-access-24vxw\") pod \"node-ca-97psz\" (UID: \"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\") " pod="openshift-image-registry/node-ca-97psz" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.007688 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.025087 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.038037 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.064257 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.079580 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.091080 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.091144 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.091156 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.091174 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.091187 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:12Z","lastTransitionTime":"2026-02-27T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.160764 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-97psz" Feb 27 10:28:12 crc kubenswrapper[4728]: W0227 10:28:12.184151 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f9f1c81_b0b0_4016_9ef5_38cd92277b5b.slice/crio-7370d275aac167d79845590c835fd2c01bf9d2431ab1b4e0b637dab5d8433fbd WatchSource:0}: Error finding container 7370d275aac167d79845590c835fd2c01bf9d2431ab1b4e0b637dab5d8433fbd: Status 404 returned error can't find the container with id 7370d275aac167d79845590c835fd2c01bf9d2431ab1b4e0b637dab5d8433fbd Feb 27 10:28:12 crc kubenswrapper[4728]: E0227 10:28:12.188169 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:12 crc kubenswrapper[4728]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 27 10:28:12 crc kubenswrapper[4728]: while [ true ]; Feb 27 10:28:12 crc kubenswrapper[4728]: do Feb 27 10:28:12 crc kubenswrapper[4728]: for f in $(ls /tmp/serviceca); do Feb 27 10:28:12 crc kubenswrapper[4728]: echo $f Feb 27 10:28:12 crc kubenswrapper[4728]: ca_file_path="/tmp/serviceca/${f}" Feb 27 10:28:12 crc kubenswrapper[4728]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 27 10:28:12 crc kubenswrapper[4728]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 27 10:28:12 crc kubenswrapper[4728]: if [ -e "${reg_dir_path}" ]; then Feb 27 10:28:12 crc kubenswrapper[4728]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 27 10:28:12 crc kubenswrapper[4728]: else Feb 27 10:28:12 crc kubenswrapper[4728]: mkdir $reg_dir_path Feb 27 10:28:12 crc kubenswrapper[4728]: cp $ca_file_path $reg_dir_path/ca.crt Feb 27 10:28:12 crc 
kubenswrapper[4728]: fi Feb 27 10:28:12 crc kubenswrapper[4728]: done Feb 27 10:28:12 crc kubenswrapper[4728]: for d in $(ls /etc/docker/certs.d); do Feb 27 10:28:12 crc kubenswrapper[4728]: echo $d Feb 27 10:28:12 crc kubenswrapper[4728]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 27 10:28:12 crc kubenswrapper[4728]: reg_conf_path="/tmp/serviceca/${dp}" Feb 27 10:28:12 crc kubenswrapper[4728]: if [ ! -e "${reg_conf_path}" ]; then Feb 27 10:28:12 crc kubenswrapper[4728]: rm -rf /etc/docker/certs.d/$d Feb 27 10:28:12 crc kubenswrapper[4728]: fi Feb 27 10:28:12 crc kubenswrapper[4728]: done Feb 27 10:28:12 crc kubenswrapper[4728]: sleep 60 & wait ${!} Feb 27 10:28:12 crc kubenswrapper[4728]: done Feb 27 10:28:12 crc kubenswrapper[4728]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24vxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe
:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-97psz_openshift-image-registry(6f9f1c81-b0b0-4016-9ef5-38cd92277b5b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:12 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:12 crc kubenswrapper[4728]: E0227 10:28:12.189348 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-97psz" podUID="6f9f1c81-b0b0-4016-9ef5-38cd92277b5b" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.195344 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.195398 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.195414 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.195437 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.195453 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:12Z","lastTransitionTime":"2026-02-27T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.201586 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-97psz" event={"ID":"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b","Type":"ContainerStarted","Data":"7370d275aac167d79845590c835fd2c01bf9d2431ab1b4e0b637dab5d8433fbd"} Feb 27 10:28:12 crc kubenswrapper[4728]: E0227 10:28:12.203752 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:12 crc kubenswrapper[4728]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 27 10:28:12 crc kubenswrapper[4728]: while [ true ]; Feb 27 10:28:12 crc kubenswrapper[4728]: do Feb 27 10:28:12 crc kubenswrapper[4728]: for f in $(ls /tmp/serviceca); do Feb 27 10:28:12 crc kubenswrapper[4728]: echo $f Feb 27 10:28:12 crc kubenswrapper[4728]: ca_file_path="/tmp/serviceca/${f}" Feb 27 10:28:12 crc kubenswrapper[4728]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 27 10:28:12 crc kubenswrapper[4728]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 27 10:28:12 crc kubenswrapper[4728]: if [ -e "${reg_dir_path}" ]; then Feb 27 10:28:12 crc kubenswrapper[4728]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 27 10:28:12 crc kubenswrapper[4728]: else Feb 27 10:28:12 crc kubenswrapper[4728]: mkdir $reg_dir_path Feb 27 10:28:12 crc kubenswrapper[4728]: cp $ca_file_path $reg_dir_path/ca.crt Feb 27 10:28:12 crc kubenswrapper[4728]: fi Feb 27 10:28:12 crc kubenswrapper[4728]: done Feb 27 10:28:12 crc kubenswrapper[4728]: for d in $(ls /etc/docker/certs.d); do Feb 27 10:28:12 crc kubenswrapper[4728]: echo $d Feb 27 10:28:12 crc kubenswrapper[4728]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 27 10:28:12 crc kubenswrapper[4728]: reg_conf_path="/tmp/serviceca/${dp}" Feb 27 10:28:12 crc kubenswrapper[4728]: if [ ! 
-e "${reg_conf_path}" ]; then Feb 27 10:28:12 crc kubenswrapper[4728]: rm -rf /etc/docker/certs.d/$d Feb 27 10:28:12 crc kubenswrapper[4728]: fi Feb 27 10:28:12 crc kubenswrapper[4728]: done Feb 27 10:28:12 crc kubenswrapper[4728]: sleep 60 & wait ${!} Feb 27 10:28:12 crc kubenswrapper[4728]: done Feb 27 10:28:12 crc kubenswrapper[4728]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24vxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-97psz_openshift-image-registry(6f9f1c81-b0b0-4016-9ef5-38cd92277b5b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:12 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:12 crc kubenswrapper[4728]: E0227 10:28:12.205196 4728 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-97psz" podUID="6f9f1c81-b0b0-4016-9ef5-38cd92277b5b" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.225880 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.246462 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.265946 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.274249 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.289452 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.297323 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.297376 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.297393 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.297416 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.297432 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:12Z","lastTransitionTime":"2026-02-27T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.298238 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-97psz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24vxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-97psz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.309559 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\",\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 
10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.320777 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.328731 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.335226 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.347928 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.359121 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.377521 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.385379 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.400363 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.400421 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.400440 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.400463 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.400480 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:12Z","lastTransitionTime":"2026-02-27T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.502657 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.502695 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.502709 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.502725 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.502736 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:12Z","lastTransitionTime":"2026-02-27T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.604991 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.605051 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.605067 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.605091 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.605107 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:12Z","lastTransitionTime":"2026-02-27T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.708126 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.708188 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.708207 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.708232 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.708250 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:12Z","lastTransitionTime":"2026-02-27T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.724721 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.724772 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.724842 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:12 crc kubenswrapper[4728]: E0227 10:28:12.724978 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:12 crc kubenswrapper[4728]: E0227 10:28:12.725116 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:12 crc kubenswrapper[4728]: E0227 10:28:12.725236 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.811267 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.811323 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.811340 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.811362 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.811378 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:12Z","lastTransitionTime":"2026-02-27T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.914174 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.914251 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.914277 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.914307 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:12 crc kubenswrapper[4728]: I0227 10:28:12.914328 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:12Z","lastTransitionTime":"2026-02-27T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.017760 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.017831 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.017855 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.017885 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.017907 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:13Z","lastTransitionTime":"2026-02-27T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.121889 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.121972 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.121996 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.122029 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.122049 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:13Z","lastTransitionTime":"2026-02-27T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.225062 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.225160 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.225183 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.225214 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.225235 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:13Z","lastTransitionTime":"2026-02-27T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.328811 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.328865 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.328882 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.328941 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.328960 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:13Z","lastTransitionTime":"2026-02-27T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.432446 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.432548 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.432573 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.432602 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.432622 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:13Z","lastTransitionTime":"2026-02-27T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.535540 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.535597 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.535614 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.535637 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.535653 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:13Z","lastTransitionTime":"2026-02-27T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.638178 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.638244 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.638263 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.638289 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.638318 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:13Z","lastTransitionTime":"2026-02-27T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.741451 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.741552 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.741586 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.741616 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.741638 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:13Z","lastTransitionTime":"2026-02-27T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.844810 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.844886 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.844907 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.844953 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.844975 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:13Z","lastTransitionTime":"2026-02-27T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.948079 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.948135 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.948154 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.948176 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:13 crc kubenswrapper[4728]: I0227 10:28:13.948193 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:13Z","lastTransitionTime":"2026-02-27T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.051571 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.051652 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.051676 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.051707 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.051729 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:14Z","lastTransitionTime":"2026-02-27T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.154699 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.154767 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.154786 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.154810 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.154827 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:14Z","lastTransitionTime":"2026-02-27T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.257957 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.258029 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.258052 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.258144 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.258185 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:14Z","lastTransitionTime":"2026-02-27T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.361085 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.361204 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.361229 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.361260 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.361278 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:14Z","lastTransitionTime":"2026-02-27T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.465100 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.465167 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.465179 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.465197 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.465209 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:14Z","lastTransitionTime":"2026-02-27T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.568542 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.568591 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.568603 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.568622 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.568634 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:14Z","lastTransitionTime":"2026-02-27T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.671676 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.671736 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.671758 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.671788 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.671811 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:14Z","lastTransitionTime":"2026-02-27T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.724056 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.724251 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:14 crc kubenswrapper[4728]: E0227 10:28:14.724393 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:14 crc kubenswrapper[4728]: E0227 10:28:14.724252 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.724061 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:14 crc kubenswrapper[4728]: E0227 10:28:14.724610 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.774965 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.775026 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.775043 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.775068 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.775086 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:14Z","lastTransitionTime":"2026-02-27T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.879343 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.879425 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.879445 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.879472 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.879491 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:14Z","lastTransitionTime":"2026-02-27T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.960127 4728 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.982166 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.982288 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.982308 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.982332 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:14 crc kubenswrapper[4728]: I0227 10:28:14.982350 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:14Z","lastTransitionTime":"2026-02-27T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.085333 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.085386 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.085404 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.085425 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.085441 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:15Z","lastTransitionTime":"2026-02-27T10:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.187976 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.188043 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.188060 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.188086 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.188104 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:15Z","lastTransitionTime":"2026-02-27T10:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.291103 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.291173 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.291188 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.291206 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.291220 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:15Z","lastTransitionTime":"2026-02-27T10:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.394583 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.394646 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.394662 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.394686 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.394704 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:15Z","lastTransitionTime":"2026-02-27T10:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.498250 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.498312 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.498330 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.498353 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.498370 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:15Z","lastTransitionTime":"2026-02-27T10:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.602270 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.602340 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.602362 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.602392 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.602416 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:15Z","lastTransitionTime":"2026-02-27T10:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.705461 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.705815 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.705840 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.705864 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.705881 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:15Z","lastTransitionTime":"2026-02-27T10:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.808486 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.808591 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.808611 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.808636 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.808654 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:15Z","lastTransitionTime":"2026-02-27T10:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.912031 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.912093 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.912110 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.912134 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:15 crc kubenswrapper[4728]: I0227 10:28:15.912153 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:15Z","lastTransitionTime":"2026-02-27T10:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.014730 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.014795 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.014817 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.014847 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.014869 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:16Z","lastTransitionTime":"2026-02-27T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.117452 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.117586 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.117610 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.117639 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.117664 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:16Z","lastTransitionTime":"2026-02-27T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.220418 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.220463 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.220481 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.220545 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.220569 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:16Z","lastTransitionTime":"2026-02-27T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.323363 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.323400 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.323411 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.323428 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.323441 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:16Z","lastTransitionTime":"2026-02-27T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.427188 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.428144 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.428538 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.428736 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.428976 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:16Z","lastTransitionTime":"2026-02-27T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.532434 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.532487 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.532541 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.532571 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.532592 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:16Z","lastTransitionTime":"2026-02-27T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.635108 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.635600 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.635807 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.636015 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.636191 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:16Z","lastTransitionTime":"2026-02-27T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.724178 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.724290 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:16 crc kubenswrapper[4728]: E0227 10:28:16.724398 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.724290 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:16 crc kubenswrapper[4728]: E0227 10:28:16.724486 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:16 crc kubenswrapper[4728]: E0227 10:28:16.724701 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.739018 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.739066 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.739082 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.739102 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.739120 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:16Z","lastTransitionTime":"2026-02-27T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.842047 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.842113 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.842131 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.842158 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.842175 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:16Z","lastTransitionTime":"2026-02-27T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.944930 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.945043 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.945066 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.945097 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:16 crc kubenswrapper[4728]: I0227 10:28:16.945118 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:16Z","lastTransitionTime":"2026-02-27T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.048568 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.048637 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.048659 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.048688 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.048706 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:17Z","lastTransitionTime":"2026-02-27T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.151200 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.151428 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.151613 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.151801 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.151997 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:17Z","lastTransitionTime":"2026-02-27T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.255038 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.255322 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.255407 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.255494 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.255630 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:17Z","lastTransitionTime":"2026-02-27T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.358631 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.358679 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.358695 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.358718 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.358736 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:17Z","lastTransitionTime":"2026-02-27T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.461736 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.461845 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.461872 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.461901 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.461919 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:17Z","lastTransitionTime":"2026-02-27T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.565004 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.565079 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.565103 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.565132 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.565157 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:17Z","lastTransitionTime":"2026-02-27T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.667627 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.667705 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.667729 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.667757 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.667777 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:17Z","lastTransitionTime":"2026-02-27T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.703819 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf"] Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.704689 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.707943 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.712248 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.721631 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:17 crc kubenswrapper[4728]: E0227 10:28:17.727110 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c64ws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:28:17 crc kubenswrapper[4728]: E0227 10:28:17.727389 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:17 crc kubenswrapper[4728]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 27 10:28:17 crc kubenswrapper[4728]: set -uo pipefail Feb 27 10:28:17 crc kubenswrapper[4728]: 
Feb 27 10:28:17 crc kubenswrapper[4728]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 27 10:28:17 crc kubenswrapper[4728]: Feb 27 10:28:17 crc kubenswrapper[4728]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 27 10:28:17 crc kubenswrapper[4728]: HOSTS_FILE="/etc/hosts" Feb 27 10:28:17 crc kubenswrapper[4728]: TEMP_FILE="/etc/hosts.tmp" Feb 27 10:28:17 crc kubenswrapper[4728]: Feb 27 10:28:17 crc kubenswrapper[4728]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 27 10:28:17 crc kubenswrapper[4728]: Feb 27 10:28:17 crc kubenswrapper[4728]: # Make a temporary file with the old hosts file's attributes. Feb 27 10:28:17 crc kubenswrapper[4728]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 27 10:28:17 crc kubenswrapper[4728]: echo "Failed to preserve hosts file. Exiting." Feb 27 10:28:17 crc kubenswrapper[4728]: exit 1 Feb 27 10:28:17 crc kubenswrapper[4728]: fi Feb 27 10:28:17 crc kubenswrapper[4728]: Feb 27 10:28:17 crc kubenswrapper[4728]: while true; do Feb 27 10:28:17 crc kubenswrapper[4728]: declare -A svc_ips Feb 27 10:28:17 crc kubenswrapper[4728]: for svc in "${services[@]}"; do Feb 27 10:28:17 crc kubenswrapper[4728]: # Fetch service IP from cluster dns if present. We make several tries Feb 27 10:28:17 crc kubenswrapper[4728]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 27 10:28:17 crc kubenswrapper[4728]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 27 10:28:17 crc kubenswrapper[4728]: # support UDP loadbalancers and require reaching DNS through TCP. 
Feb 27 10:28:17 crc kubenswrapper[4728]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 27 10:28:17 crc kubenswrapper[4728]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 27 10:28:17 crc kubenswrapper[4728]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 27 10:28:17 crc kubenswrapper[4728]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 27 10:28:17 crc kubenswrapper[4728]: for i in ${!cmds[*]} Feb 27 10:28:17 crc kubenswrapper[4728]: do Feb 27 10:28:17 crc kubenswrapper[4728]: ips=($(eval "${cmds[i]}")) Feb 27 10:28:17 crc kubenswrapper[4728]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 27 10:28:17 crc kubenswrapper[4728]: svc_ips["${svc}"]="${ips[@]}" Feb 27 10:28:17 crc kubenswrapper[4728]: break Feb 27 10:28:17 crc kubenswrapper[4728]: fi Feb 27 10:28:17 crc kubenswrapper[4728]: done Feb 27 10:28:17 crc kubenswrapper[4728]: done Feb 27 10:28:17 crc kubenswrapper[4728]: Feb 27 10:28:17 crc kubenswrapper[4728]: # Update /etc/hosts only if we get valid service IPs Feb 27 10:28:17 crc kubenswrapper[4728]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 27 10:28:17 crc kubenswrapper[4728]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 27 10:28:17 crc kubenswrapper[4728]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 27 10:28:17 crc kubenswrapper[4728]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 27 10:28:17 crc kubenswrapper[4728]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 27 10:28:17 crc kubenswrapper[4728]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 27 10:28:17 crc kubenswrapper[4728]: sleep 60 & wait Feb 27 10:28:17 crc kubenswrapper[4728]: continue Feb 27 10:28:17 crc kubenswrapper[4728]: fi Feb 27 10:28:17 crc kubenswrapper[4728]: Feb 27 10:28:17 crc kubenswrapper[4728]: # Append resolver entries for services Feb 27 10:28:17 crc kubenswrapper[4728]: rc=0 Feb 27 10:28:17 crc kubenswrapper[4728]: for svc in "${!svc_ips[@]}"; do Feb 27 10:28:17 crc kubenswrapper[4728]: for ip in ${svc_ips[${svc}]}; do Feb 27 10:28:17 crc kubenswrapper[4728]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 27 10:28:17 crc kubenswrapper[4728]: done Feb 27 10:28:17 crc kubenswrapper[4728]: done Feb 27 10:28:17 crc kubenswrapper[4728]: if [[ $rc -ne 0 ]]; then Feb 27 10:28:17 crc kubenswrapper[4728]: sleep 60 & wait Feb 27 10:28:17 crc kubenswrapper[4728]: continue Feb 27 10:28:17 crc kubenswrapper[4728]: fi Feb 27 10:28:17 crc kubenswrapper[4728]: Feb 27 10:28:17 crc kubenswrapper[4728]: Feb 27 10:28:17 crc kubenswrapper[4728]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 27 10:28:17 crc kubenswrapper[4728]: # Replace /etc/hosts with our modified version if needed Feb 27 10:28:17 crc kubenswrapper[4728]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 27 10:28:17 crc kubenswrapper[4728]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 27 10:28:17 crc kubenswrapper[4728]: fi Feb 27 10:28:17 crc kubenswrapper[4728]: sleep 60 & wait Feb 27 10:28:17 crc kubenswrapper[4728]: unset svc_ips Feb 27 10:28:17 crc kubenswrapper[4728]: done Feb 27 10:28:17 crc kubenswrapper[4728]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phsn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-n4c77_openshift-dns(ef8ed63c-6947-4b06-8742-54b7ba279aa7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:17 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:17 crc kubenswrapper[4728]: E0227 10:28:17.727673 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:17 crc kubenswrapper[4728]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c 
#!/bin/bash Feb 27 10:28:17 crc kubenswrapper[4728]: set -o allexport Feb 27 10:28:17 crc kubenswrapper[4728]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 27 10:28:17 crc kubenswrapper[4728]: source /etc/kubernetes/apiserver-url.env Feb 27 10:28:17 crc kubenswrapper[4728]: else Feb 27 10:28:17 crc kubenswrapper[4728]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 27 10:28:17 crc kubenswrapper[4728]: exit 1 Feb 27 10:28:17 crc kubenswrapper[4728]: fi Feb 27 10:28:17 crc kubenswrapper[4728]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 27 10:28:17 crc kubenswrapper[4728]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI
_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:17 crc kubenswrapper[4728]: > 
logger="UnhandledError" Feb 27 10:28:17 crc kubenswrapper[4728]: E0227 10:28:17.728842 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 27 10:28:17 crc kubenswrapper[4728]: E0227 10:28:17.728967 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-n4c77" podUID="ef8ed63c-6947-4b06-8742-54b7ba279aa7" Feb 27 10:28:17 crc kubenswrapper[4728]: E0227 10:28:17.729968 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c64ws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:28:17 crc kubenswrapper[4728]: E0227 10:28:17.732712 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.736695 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.746026 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87b02f57-7d16-404a-9dad-36d71fe43a6f-ovnkube-config\") pod 
\"ovnkube-control-plane-749d76644c-kqwxf\" (UID: \"87b02f57-7d16-404a-9dad-36d71fe43a6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.746087 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87b02f57-7d16-404a-9dad-36d71fe43a6f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kqwxf\" (UID: \"87b02f57-7d16-404a-9dad-36d71fe43a6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.746127 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87b02f57-7d16-404a-9dad-36d71fe43a6f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kqwxf\" (UID: \"87b02f57-7d16-404a-9dad-36d71fe43a6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.746235 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk895\" (UniqueName: \"kubernetes.io/projected/87b02f57-7d16-404a-9dad-36d71fe43a6f-kube-api-access-sk895\") pod \"ovnkube-control-plane-749d76644c-kqwxf\" (UID: \"87b02f57-7d16-404a-9dad-36d71fe43a6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.764339 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.772148 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.772198 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.772215 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.772239 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.772260 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:17Z","lastTransitionTime":"2026-02-27T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.793767 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.804185 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.813238 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-97psz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24vxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-97psz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.825883 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\",\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 
10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.841129 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.847699 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87b02f57-7d16-404a-9dad-36d71fe43a6f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kqwxf\" (UID: \"87b02f57-7d16-404a-9dad-36d71fe43a6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.847749 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87b02f57-7d16-404a-9dad-36d71fe43a6f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kqwxf\" (UID: \"87b02f57-7d16-404a-9dad-36d71fe43a6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" 
Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.847789 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87b02f57-7d16-404a-9dad-36d71fe43a6f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kqwxf\" (UID: \"87b02f57-7d16-404a-9dad-36d71fe43a6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.847855 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk895\" (UniqueName: \"kubernetes.io/projected/87b02f57-7d16-404a-9dad-36d71fe43a6f-kube-api-access-sk895\") pod \"ovnkube-control-plane-749d76644c-kqwxf\" (UID: \"87b02f57-7d16-404a-9dad-36d71fe43a6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.849070 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87b02f57-7d16-404a-9dad-36d71fe43a6f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kqwxf\" (UID: \"87b02f57-7d16-404a-9dad-36d71fe43a6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.849586 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87b02f57-7d16-404a-9dad-36d71fe43a6f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kqwxf\" (UID: \"87b02f57-7d16-404a-9dad-36d71fe43a6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.856201 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87b02f57-7d16-404a-9dad-36d71fe43a6f-ovn-control-plane-metrics-cert\") 
pod \"ovnkube-control-plane-749d76644c-kqwxf\" (UID: \"87b02f57-7d16-404a-9dad-36d71fe43a6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.862771 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.876249 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.876284 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.876302 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.876327 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.876346 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:17Z","lastTransitionTime":"2026-02-27T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.905997 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk895\" (UniqueName: \"kubernetes.io/projected/87b02f57-7d16-404a-9dad-36d71fe43a6f-kube-api-access-sk895\") pod \"ovnkube-control-plane-749d76644c-kqwxf\" (UID: \"87b02f57-7d16-404a-9dad-36d71fe43a6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.906278 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.914948 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.927984 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.939985 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.951425 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy 
cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.962261 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87b02f57-7d16-404a-9dad-36d71fe43a6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kqwxf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.979110 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.979171 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.979188 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.979213 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:17 crc kubenswrapper[4728]: I0227 10:28:17.979231 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:17Z","lastTransitionTime":"2026-02-27T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.032886 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" Feb 27 10:28:18 crc kubenswrapper[4728]: E0227 10:28:18.046096 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:18 crc kubenswrapper[4728]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Feb 27 10:28:18 crc kubenswrapper[4728]: set -euo pipefail Feb 27 10:28:18 crc kubenswrapper[4728]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 27 10:28:18 crc kubenswrapper[4728]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 27 10:28:18 crc kubenswrapper[4728]: # As the secret mount is optional we must wait for the files to be present. Feb 27 10:28:18 crc kubenswrapper[4728]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 27 10:28:18 crc kubenswrapper[4728]: TS=$(date +%s) Feb 27 10:28:18 crc kubenswrapper[4728]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 27 10:28:18 crc kubenswrapper[4728]: HAS_LOGGED_INFO=0 Feb 27 10:28:18 crc kubenswrapper[4728]: Feb 27 10:28:18 crc kubenswrapper[4728]: log_missing_certs(){ Feb 27 10:28:18 crc kubenswrapper[4728]: CUR_TS=$(date +%s) Feb 27 10:28:18 crc kubenswrapper[4728]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 27 10:28:18 crc kubenswrapper[4728]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 27 10:28:18 crc kubenswrapper[4728]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 27 10:28:18 crc kubenswrapper[4728]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 27 10:28:18 crc kubenswrapper[4728]: HAS_LOGGED_INFO=1 Feb 27 10:28:18 crc kubenswrapper[4728]: fi Feb 27 10:28:18 crc kubenswrapper[4728]: } Feb 27 10:28:18 crc kubenswrapper[4728]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Feb 27 10:28:18 crc kubenswrapper[4728]: log_missing_certs Feb 27 10:28:18 crc kubenswrapper[4728]: sleep 5 Feb 27 10:28:18 crc kubenswrapper[4728]: done Feb 27 10:28:18 crc kubenswrapper[4728]: Feb 27 10:28:18 crc kubenswrapper[4728]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 27 10:28:18 crc kubenswrapper[4728]: exec /usr/bin/kube-rbac-proxy \ Feb 27 10:28:18 crc kubenswrapper[4728]: --logtostderr \ Feb 27 10:28:18 crc kubenswrapper[4728]: --secure-listen-address=:9108 \ Feb 27 10:28:18 crc kubenswrapper[4728]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 27 10:28:18 crc kubenswrapper[4728]: --upstream=http://127.0.0.1:29108/ \ Feb 27 10:28:18 crc kubenswrapper[4728]: --tls-private-key-file=${TLS_PK} \ Feb 27 10:28:18 crc kubenswrapper[4728]: --tls-cert-file=${TLS_CERT} Feb 27 10:28:18 crc kubenswrapper[4728]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sk895,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-kqwxf_openshift-ovn-kubernetes(87b02f57-7d16-404a-9dad-36d71fe43a6f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:18 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:18 crc kubenswrapper[4728]: E0227 10:28:18.048417 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:18 crc kubenswrapper[4728]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 10:28:18 crc kubenswrapper[4728]: if [[ -f "/env/_master" ]]; then Feb 27 10:28:18 crc kubenswrapper[4728]: set -o allexport Feb 27 10:28:18 crc kubenswrapper[4728]: source "/env/_master" Feb 27 10:28:18 crc kubenswrapper[4728]: set +o allexport Feb 27 10:28:18 crc kubenswrapper[4728]: fi Feb 27 10:28:18 crc kubenswrapper[4728]: Feb 27 10:28:18 crc kubenswrapper[4728]: ovn_v4_join_subnet_opt= Feb 27 10:28:18 crc kubenswrapper[4728]: if [[ "" != "" ]]; then Feb 27 10:28:18 crc kubenswrapper[4728]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 27 
10:28:18 crc kubenswrapper[4728]: fi Feb 27 10:28:18 crc kubenswrapper[4728]: ovn_v6_join_subnet_opt= Feb 27 10:28:18 crc kubenswrapper[4728]: if [[ "" != "" ]]; then Feb 27 10:28:18 crc kubenswrapper[4728]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 27 10:28:18 crc kubenswrapper[4728]: fi Feb 27 10:28:18 crc kubenswrapper[4728]: Feb 27 10:28:18 crc kubenswrapper[4728]: ovn_v4_transit_switch_subnet_opt= Feb 27 10:28:18 crc kubenswrapper[4728]: if [[ "" != "" ]]; then Feb 27 10:28:18 crc kubenswrapper[4728]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 27 10:28:18 crc kubenswrapper[4728]: fi Feb 27 10:28:18 crc kubenswrapper[4728]: ovn_v6_transit_switch_subnet_opt= Feb 27 10:28:18 crc kubenswrapper[4728]: if [[ "" != "" ]]; then Feb 27 10:28:18 crc kubenswrapper[4728]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 27 10:28:18 crc kubenswrapper[4728]: fi Feb 27 10:28:18 crc kubenswrapper[4728]: Feb 27 10:28:18 crc kubenswrapper[4728]: dns_name_resolver_enabled_flag= Feb 27 10:28:18 crc kubenswrapper[4728]: if [[ "false" == "true" ]]; then Feb 27 10:28:18 crc kubenswrapper[4728]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 27 10:28:18 crc kubenswrapper[4728]: fi Feb 27 10:28:18 crc kubenswrapper[4728]: Feb 27 10:28:18 crc kubenswrapper[4728]: persistent_ips_enabled_flag= Feb 27 10:28:18 crc kubenswrapper[4728]: if [[ "true" == "true" ]]; then Feb 27 10:28:18 crc kubenswrapper[4728]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 27 10:28:18 crc kubenswrapper[4728]: fi Feb 27 10:28:18 crc kubenswrapper[4728]: Feb 27 10:28:18 crc kubenswrapper[4728]: # This is needed so that converting clusters from GA to TP Feb 27 10:28:18 crc kubenswrapper[4728]: # will rollout control plane pods as well Feb 27 10:28:18 crc kubenswrapper[4728]: network_segmentation_enabled_flag= Feb 27 10:28:18 crc kubenswrapper[4728]: multi_network_enabled_flag= Feb 27 10:28:18 crc 
kubenswrapper[4728]: if [[ "true" == "true" ]]; then Feb 27 10:28:18 crc kubenswrapper[4728]: multi_network_enabled_flag="--enable-multi-network" Feb 27 10:28:18 crc kubenswrapper[4728]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 27 10:28:18 crc kubenswrapper[4728]: fi Feb 27 10:28:18 crc kubenswrapper[4728]: Feb 27 10:28:18 crc kubenswrapper[4728]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 27 10:28:18 crc kubenswrapper[4728]: exec /usr/bin/ovnkube \ Feb 27 10:28:18 crc kubenswrapper[4728]: --enable-interconnect \ Feb 27 10:28:18 crc kubenswrapper[4728]: --init-cluster-manager "${K8S_NODE}" \ Feb 27 10:28:18 crc kubenswrapper[4728]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 27 10:28:18 crc kubenswrapper[4728]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 27 10:28:18 crc kubenswrapper[4728]: --metrics-bind-address "127.0.0.1:29108" \ Feb 27 10:28:18 crc kubenswrapper[4728]: --metrics-enable-pprof \ Feb 27 10:28:18 crc kubenswrapper[4728]: --metrics-enable-config-duration \ Feb 27 10:28:18 crc kubenswrapper[4728]: ${ovn_v4_join_subnet_opt} \ Feb 27 10:28:18 crc kubenswrapper[4728]: ${ovn_v6_join_subnet_opt} \ Feb 27 10:28:18 crc kubenswrapper[4728]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 27 10:28:18 crc kubenswrapper[4728]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 27 10:28:18 crc kubenswrapper[4728]: ${dns_name_resolver_enabled_flag} \ Feb 27 10:28:18 crc kubenswrapper[4728]: ${persistent_ips_enabled_flag} \ Feb 27 10:28:18 crc kubenswrapper[4728]: ${multi_network_enabled_flag} \ Feb 27 10:28:18 crc kubenswrapper[4728]: ${network_segmentation_enabled_flag} Feb 27 10:28:18 crc kubenswrapper[4728]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sk895,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-kqwxf_openshift-ovn-kubernetes(87b02f57-7d16-404a-9dad-36d71fe43a6f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:18 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:18 crc kubenswrapper[4728]: E0227 10:28:18.049684 4728 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" podUID="87b02f57-7d16-404a-9dad-36d71fe43a6f" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.082269 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.082347 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.082369 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.082399 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.082422 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:18Z","lastTransitionTime":"2026-02-27T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.185661 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.185707 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.185717 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.185733 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.185742 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:18Z","lastTransitionTime":"2026-02-27T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.221092 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" event={"ID":"87b02f57-7d16-404a-9dad-36d71fe43a6f","Type":"ContainerStarted","Data":"e302f6cd4e894cb2a2ddd38d173276c69eebe6765916845bc716adf4991fbf15"} Feb 27 10:28:18 crc kubenswrapper[4728]: E0227 10:28:18.223460 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:18 crc kubenswrapper[4728]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Feb 27 10:28:18 crc kubenswrapper[4728]: set -euo pipefail Feb 27 10:28:18 crc kubenswrapper[4728]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 27 10:28:18 crc kubenswrapper[4728]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 27 10:28:18 crc kubenswrapper[4728]: # As the secret mount is optional we must wait for the files to be present. Feb 27 10:28:18 crc kubenswrapper[4728]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 27 10:28:18 crc kubenswrapper[4728]: TS=$(date +%s) Feb 27 10:28:18 crc kubenswrapper[4728]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 27 10:28:18 crc kubenswrapper[4728]: HAS_LOGGED_INFO=0 Feb 27 10:28:18 crc kubenswrapper[4728]: Feb 27 10:28:18 crc kubenswrapper[4728]: log_missing_certs(){ Feb 27 10:28:18 crc kubenswrapper[4728]: CUR_TS=$(date +%s) Feb 27 10:28:18 crc kubenswrapper[4728]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 27 10:28:18 crc kubenswrapper[4728]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 27 10:28:18 crc kubenswrapper[4728]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 27 10:28:18 crc kubenswrapper[4728]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. 
Feb 27 10:28:18 crc kubenswrapper[4728]: HAS_LOGGED_INFO=1 Feb 27 10:28:18 crc kubenswrapper[4728]: fi Feb 27 10:28:18 crc kubenswrapper[4728]: } Feb 27 10:28:18 crc kubenswrapper[4728]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Feb 27 10:28:18 crc kubenswrapper[4728]: log_missing_certs Feb 27 10:28:18 crc kubenswrapper[4728]: sleep 5 Feb 27 10:28:18 crc kubenswrapper[4728]: done Feb 27 10:28:18 crc kubenswrapper[4728]: Feb 27 10:28:18 crc kubenswrapper[4728]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 27 10:28:18 crc kubenswrapper[4728]: exec /usr/bin/kube-rbac-proxy \ Feb 27 10:28:18 crc kubenswrapper[4728]: --logtostderr \ Feb 27 10:28:18 crc kubenswrapper[4728]: --secure-listen-address=:9108 \ Feb 27 10:28:18 crc kubenswrapper[4728]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 27 10:28:18 crc kubenswrapper[4728]: --upstream=http://127.0.0.1:29108/ \ Feb 27 10:28:18 crc kubenswrapper[4728]: --tls-private-key-file=${TLS_PK} \ Feb 27 10:28:18 crc kubenswrapper[4728]: --tls-cert-file=${TLS_CERT} Feb 27 10:28:18 crc kubenswrapper[4728]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sk895,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-kqwxf_openshift-ovn-kubernetes(87b02f57-7d16-404a-9dad-36d71fe43a6f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:18 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:18 crc kubenswrapper[4728]: E0227 10:28:18.226198 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:18 crc kubenswrapper[4728]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 10:28:18 crc kubenswrapper[4728]: if [[ -f "/env/_master" ]]; then Feb 27 10:28:18 crc kubenswrapper[4728]: set -o allexport Feb 27 10:28:18 crc kubenswrapper[4728]: source "/env/_master" Feb 27 10:28:18 crc kubenswrapper[4728]: set +o allexport Feb 27 10:28:18 crc kubenswrapper[4728]: fi Feb 27 10:28:18 crc kubenswrapper[4728]: Feb 27 10:28:18 crc kubenswrapper[4728]: ovn_v4_join_subnet_opt= Feb 27 10:28:18 crc kubenswrapper[4728]: if [[ "" != "" ]]; then Feb 27 10:28:18 crc kubenswrapper[4728]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 27 
10:28:18 crc kubenswrapper[4728]: fi Feb 27 10:28:18 crc kubenswrapper[4728]: ovn_v6_join_subnet_opt= Feb 27 10:28:18 crc kubenswrapper[4728]: if [[ "" != "" ]]; then Feb 27 10:28:18 crc kubenswrapper[4728]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 27 10:28:18 crc kubenswrapper[4728]: fi Feb 27 10:28:18 crc kubenswrapper[4728]: Feb 27 10:28:18 crc kubenswrapper[4728]: ovn_v4_transit_switch_subnet_opt= Feb 27 10:28:18 crc kubenswrapper[4728]: if [[ "" != "" ]]; then Feb 27 10:28:18 crc kubenswrapper[4728]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 27 10:28:18 crc kubenswrapper[4728]: fi Feb 27 10:28:18 crc kubenswrapper[4728]: ovn_v6_transit_switch_subnet_opt= Feb 27 10:28:18 crc kubenswrapper[4728]: if [[ "" != "" ]]; then Feb 27 10:28:18 crc kubenswrapper[4728]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 27 10:28:18 crc kubenswrapper[4728]: fi Feb 27 10:28:18 crc kubenswrapper[4728]: Feb 27 10:28:18 crc kubenswrapper[4728]: dns_name_resolver_enabled_flag= Feb 27 10:28:18 crc kubenswrapper[4728]: if [[ "false" == "true" ]]; then Feb 27 10:28:18 crc kubenswrapper[4728]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 27 10:28:18 crc kubenswrapper[4728]: fi Feb 27 10:28:18 crc kubenswrapper[4728]: Feb 27 10:28:18 crc kubenswrapper[4728]: persistent_ips_enabled_flag= Feb 27 10:28:18 crc kubenswrapper[4728]: if [[ "true" == "true" ]]; then Feb 27 10:28:18 crc kubenswrapper[4728]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 27 10:28:18 crc kubenswrapper[4728]: fi Feb 27 10:28:18 crc kubenswrapper[4728]: Feb 27 10:28:18 crc kubenswrapper[4728]: # This is needed so that converting clusters from GA to TP Feb 27 10:28:18 crc kubenswrapper[4728]: # will rollout control plane pods as well Feb 27 10:28:18 crc kubenswrapper[4728]: network_segmentation_enabled_flag= Feb 27 10:28:18 crc kubenswrapper[4728]: multi_network_enabled_flag= Feb 27 10:28:18 crc 
kubenswrapper[4728]: if [[ "true" == "true" ]]; then Feb 27 10:28:18 crc kubenswrapper[4728]: multi_network_enabled_flag="--enable-multi-network" Feb 27 10:28:18 crc kubenswrapper[4728]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 27 10:28:18 crc kubenswrapper[4728]: fi Feb 27 10:28:18 crc kubenswrapper[4728]: Feb 27 10:28:18 crc kubenswrapper[4728]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 27 10:28:18 crc kubenswrapper[4728]: exec /usr/bin/ovnkube \ Feb 27 10:28:18 crc kubenswrapper[4728]: --enable-interconnect \ Feb 27 10:28:18 crc kubenswrapper[4728]: --init-cluster-manager "${K8S_NODE}" \ Feb 27 10:28:18 crc kubenswrapper[4728]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 27 10:28:18 crc kubenswrapper[4728]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 27 10:28:18 crc kubenswrapper[4728]: --metrics-bind-address "127.0.0.1:29108" \ Feb 27 10:28:18 crc kubenswrapper[4728]: --metrics-enable-pprof \ Feb 27 10:28:18 crc kubenswrapper[4728]: --metrics-enable-config-duration \ Feb 27 10:28:18 crc kubenswrapper[4728]: ${ovn_v4_join_subnet_opt} \ Feb 27 10:28:18 crc kubenswrapper[4728]: ${ovn_v6_join_subnet_opt} \ Feb 27 10:28:18 crc kubenswrapper[4728]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 27 10:28:18 crc kubenswrapper[4728]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 27 10:28:18 crc kubenswrapper[4728]: ${dns_name_resolver_enabled_flag} \ Feb 27 10:28:18 crc kubenswrapper[4728]: ${persistent_ips_enabled_flag} \ Feb 27 10:28:18 crc kubenswrapper[4728]: ${multi_network_enabled_flag} \ Feb 27 10:28:18 crc kubenswrapper[4728]: ${network_segmentation_enabled_flag} Feb 27 10:28:18 crc kubenswrapper[4728]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sk895,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-kqwxf_openshift-ovn-kubernetes(87b02f57-7d16-404a-9dad-36d71fe43a6f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:18 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:18 crc kubenswrapper[4728]: E0227 10:28:18.227447 4728 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" podUID="87b02f57-7d16-404a-9dad-36d71fe43a6f" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.234000 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-97psz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24vxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-97psz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.246332 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\",\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.261746 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.272194 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.284205 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.288421 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.288459 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.288470 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.288489 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.288522 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:18Z","lastTransitionTime":"2026-02-27T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.294259 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.305716 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.315102 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.327619 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.338740 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87b02f57-7d16-404a-9dad-36d71fe43a6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kqwxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.350428 4728 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.361214 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.386322 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.391702 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.391768 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.391791 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.391821 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.391843 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:18Z","lastTransitionTime":"2026-02-27T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.412684 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.424895 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-wv4rk"] Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.426204 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:18 crc kubenswrapper[4728]: E0227 10:28:18.426587 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wv4rk" podUID="861d0263-093a-4dfa-93d7-d3efb29da94b" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.430618 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.442142 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wv4rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"861d0263-093a-4dfa-93d7-d3efb29da94b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wv4rk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.454946 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nd56\" (UniqueName: \"kubernetes.io/projected/861d0263-093a-4dfa-93d7-d3efb29da94b-kube-api-access-2nd56\") pod \"network-metrics-daemon-wv4rk\" (UID: \"861d0263-093a-4dfa-93d7-d3efb29da94b\") " pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.455078 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs\") pod \"network-metrics-daemon-wv4rk\" (UID: \"861d0263-093a-4dfa-93d7-d3efb29da94b\") " pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.458779 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.474790 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.494490 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.495777 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.495844 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.495869 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.495899 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.495923 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:18Z","lastTransitionTime":"2026-02-27T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.508793 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.537782 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.552831 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.555981 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nd56\" (UniqueName: \"kubernetes.io/projected/861d0263-093a-4dfa-93d7-d3efb29da94b-kube-api-access-2nd56\") pod \"network-metrics-daemon-wv4rk\" (UID: \"861d0263-093a-4dfa-93d7-d3efb29da94b\") " pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.556147 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs\") pod \"network-metrics-daemon-wv4rk\" (UID: \"861d0263-093a-4dfa-93d7-d3efb29da94b\") " pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:18 crc kubenswrapper[4728]: E0227 10:28:18.556294 4728 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:28:18 crc 
kubenswrapper[4728]: E0227 10:28:18.556396 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs podName:861d0263-093a-4dfa-93d7-d3efb29da94b nodeName:}" failed. No retries permitted until 2026-02-27 10:28:19.056370881 +0000 UTC m=+119.018737047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs") pod "network-metrics-daemon-wv4rk" (UID: "861d0263-093a-4dfa-93d7-d3efb29da94b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.565727 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.581853 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.582969 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nd56\" (UniqueName: \"kubernetes.io/projected/861d0263-093a-4dfa-93d7-d3efb29da94b-kube-api-access-2nd56\") pod \"network-metrics-daemon-wv4rk\" (UID: \"861d0263-093a-4dfa-93d7-d3efb29da94b\") " pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.593882 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-97psz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24vxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-97psz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.598492 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.598590 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.598608 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 
10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.598633 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.598650 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:18Z","lastTransitionTime":"2026-02-27T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.607149 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\",\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 
10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.620261 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.630998 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.640759 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.657148 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy 
cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.668822 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87b02f57-7d16-404a-9dad-36d71fe43a6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kqwxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.702172 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 
10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.702226 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.702245 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.702268 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.702286 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:18Z","lastTransitionTime":"2026-02-27T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.724007 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.724725 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.724830 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:18 crc kubenswrapper[4728]: E0227 10:28:18.725193 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:18 crc kubenswrapper[4728]: E0227 10:28:18.725361 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:18 crc kubenswrapper[4728]: E0227 10:28:18.725543 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:18 crc kubenswrapper[4728]: E0227 10:28:18.727086 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:18 crc kubenswrapper[4728]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 27 10:28:18 crc kubenswrapper[4728]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 27 10:28:18 crc kubenswrapper[4728]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjq97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-9tlth_openshift-multus(468912b7-185a-4869-9a65-70cbcb3c4fb1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:18 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:18 crc kubenswrapper[4728]: E0227 10:28:18.727439 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qtlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOn
ce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-xghgn_openshift-multus(0cd760d8-c9b2-4e95-97a3-94bc759c9884): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:28:18 crc kubenswrapper[4728]: E0227 10:28:18.728399 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-9tlth" podUID="468912b7-185a-4869-9a65-70cbcb3c4fb1" Feb 27 10:28:18 crc kubenswrapper[4728]: E0227 10:28:18.728571 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-xghgn" podUID="0cd760d8-c9b2-4e95-97a3-94bc759c9884" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.805329 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.805375 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.805387 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.805404 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.805415 4728 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:18Z","lastTransitionTime":"2026-02-27T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.908268 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.908325 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.908341 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.908365 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:18 crc kubenswrapper[4728]: I0227 10:28:18.908383 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:18Z","lastTransitionTime":"2026-02-27T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.011715 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.011855 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.011877 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.011902 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.011918 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:19Z","lastTransitionTime":"2026-02-27T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.062350 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs\") pod \"network-metrics-daemon-wv4rk\" (UID: \"861d0263-093a-4dfa-93d7-d3efb29da94b\") " pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:19 crc kubenswrapper[4728]: E0227 10:28:19.062641 4728 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:28:19 crc kubenswrapper[4728]: E0227 10:28:19.062772 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs podName:861d0263-093a-4dfa-93d7-d3efb29da94b nodeName:}" failed. No retries permitted until 2026-02-27 10:28:20.062741034 +0000 UTC m=+120.025107170 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs") pod "network-metrics-daemon-wv4rk" (UID: "861d0263-093a-4dfa-93d7-d3efb29da94b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.115119 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.115292 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.115316 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.115683 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.116028 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:19Z","lastTransitionTime":"2026-02-27T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.219647 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.219710 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.219728 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.219755 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.219773 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:19Z","lastTransitionTime":"2026-02-27T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.324850 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.324920 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.324944 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.324977 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.325001 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:19Z","lastTransitionTime":"2026-02-27T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.370719 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.370781 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.370792 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.370815 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.370836 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:19Z","lastTransitionTime":"2026-02-27T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:19 crc kubenswrapper[4728]: E0227 10:28:19.386812 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.392787 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.392850 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.392914 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.392940 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.392961 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:19Z","lastTransitionTime":"2026-02-27T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:19 crc kubenswrapper[4728]: E0227 10:28:19.409571 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.415110 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.415187 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.415206 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.415236 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.415250 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:19Z","lastTransitionTime":"2026-02-27T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:28:19 crc kubenswrapper[4728]: E0227 10:28:19.430371 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.435628 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.435734 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.435792 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.435821 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.435838 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:19Z","lastTransitionTime":"2026-02-27T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:28:19 crc kubenswrapper[4728]: E0227 10:28:19.454221 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.458823 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.458875 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.458888 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.458912 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.458925 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:19Z","lastTransitionTime":"2026-02-27T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 10:28:19 crc kubenswrapper[4728]: E0227 10:28:19.473210 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:19 crc kubenswrapper[4728]: E0227 10:28:19.473407 4728 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.475684 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.475730 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.475751 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.475772 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.475788 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:19Z","lastTransitionTime":"2026-02-27T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.578956 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.579014 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.579031 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.579054 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.579070 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:19Z","lastTransitionTime":"2026-02-27T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.682084 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.682145 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.682163 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.682224 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.682242 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:19Z","lastTransitionTime":"2026-02-27T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.724435 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:19 crc kubenswrapper[4728]: E0227 10:28:19.724678 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wv4rk" podUID="861d0263-093a-4dfa-93d7-d3efb29da94b" Feb 27 10:28:19 crc kubenswrapper[4728]: E0227 10:28:19.727965 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice
{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 27 10:28:19 crc kubenswrapper[4728]: E0227 10:28:19.729263 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.785191 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.785246 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.785264 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.785288 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.785305 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:19Z","lastTransitionTime":"2026-02-27T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.888237 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.888317 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.888336 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.888362 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.888383 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:19Z","lastTransitionTime":"2026-02-27T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.991710 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.991824 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.991846 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.991919 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:19 crc kubenswrapper[4728]: I0227 10:28:19.991938 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:19Z","lastTransitionTime":"2026-02-27T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.076284 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs\") pod \"network-metrics-daemon-wv4rk\" (UID: \"861d0263-093a-4dfa-93d7-d3efb29da94b\") " pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:20 crc kubenswrapper[4728]: E0227 10:28:20.076565 4728 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:28:20 crc kubenswrapper[4728]: E0227 10:28:20.076691 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs podName:861d0263-093a-4dfa-93d7-d3efb29da94b nodeName:}" failed. No retries permitted until 2026-02-27 10:28:22.076660522 +0000 UTC m=+122.039026668 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs") pod "network-metrics-daemon-wv4rk" (UID: "861d0263-093a-4dfa-93d7-d3efb29da94b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.095066 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.095140 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.095163 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.095196 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.095218 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:20Z","lastTransitionTime":"2026-02-27T10:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.203249 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.203649 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.203718 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.203749 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.203768 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:20Z","lastTransitionTime":"2026-02-27T10:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.307179 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.307236 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.307252 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.307276 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.307293 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:20Z","lastTransitionTime":"2026-02-27T10:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.410649 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.410708 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.410729 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.410759 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.410780 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:20Z","lastTransitionTime":"2026-02-27T10:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.513717 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.513790 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.513809 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.513833 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.513850 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:20Z","lastTransitionTime":"2026-02-27T10:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.617559 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.617601 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.617612 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.617630 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.617668 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:20Z","lastTransitionTime":"2026-02-27T10:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:20 crc kubenswrapper[4728]: E0227 10:28:20.718002 4728 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.724717 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:20 crc kubenswrapper[4728]: E0227 10:28:20.724888 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.726066 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.726159 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:20 crc kubenswrapper[4728]: E0227 10:28:20.726452 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:20 crc kubenswrapper[4728]: E0227 10:28:20.727003 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:20 crc kubenswrapper[4728]: E0227 10:28:20.731838 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:20 crc kubenswrapper[4728]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 10:28:20 crc kubenswrapper[4728]: if [[ -f "/env/_master" ]]; then Feb 27 10:28:20 crc kubenswrapper[4728]: set -o allexport Feb 27 10:28:20 crc kubenswrapper[4728]: source "/env/_master" Feb 27 10:28:20 crc kubenswrapper[4728]: set +o allexport Feb 27 10:28:20 crc kubenswrapper[4728]: fi Feb 27 10:28:20 crc kubenswrapper[4728]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 27 10:28:20 crc kubenswrapper[4728]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 27 10:28:20 crc kubenswrapper[4728]: ho_enable="--enable-hybrid-overlay" Feb 27 10:28:20 crc kubenswrapper[4728]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 27 10:28:20 crc kubenswrapper[4728]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 27 10:28:20 crc kubenswrapper[4728]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 27 10:28:20 crc kubenswrapper[4728]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 27 10:28:20 crc kubenswrapper[4728]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 27 10:28:20 crc kubenswrapper[4728]: --webhook-host=127.0.0.1 \ Feb 27 10:28:20 crc kubenswrapper[4728]: --webhook-port=9743 \ Feb 27 10:28:20 crc kubenswrapper[4728]: ${ho_enable} \ Feb 27 10:28:20 crc kubenswrapper[4728]: --enable-interconnect \ Feb 27 10:28:20 crc 
kubenswrapper[4728]: --disable-approver \ Feb 27 10:28:20 crc kubenswrapper[4728]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 27 10:28:20 crc kubenswrapper[4728]: --wait-for-kubernetes-api=200s \ Feb 27 10:28:20 crc kubenswrapper[4728]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 27 10:28:20 crc kubenswrapper[4728]: --loglevel="${LOGLEVEL}" Feb 27 10:28:20 crc kubenswrapper[4728]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions
:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:20 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:20 crc kubenswrapper[4728]: E0227 10:28:20.734643 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:20 crc kubenswrapper[4728]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 27 10:28:20 crc kubenswrapper[4728]: if [[ -f "/env/_master" ]]; then Feb 27 10:28:20 crc kubenswrapper[4728]: set -o allexport Feb 27 10:28:20 crc kubenswrapper[4728]: source "/env/_master" Feb 27 10:28:20 crc kubenswrapper[4728]: set +o allexport Feb 27 10:28:20 crc kubenswrapper[4728]: fi Feb 27 10:28:20 crc kubenswrapper[4728]: Feb 27 10:28:20 crc kubenswrapper[4728]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 27 10:28:20 crc kubenswrapper[4728]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 27 10:28:20 crc kubenswrapper[4728]: --disable-webhook \ Feb 27 10:28:20 crc kubenswrapper[4728]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 27 10:28:20 crc kubenswrapper[4728]: --loglevel="${LOGLEVEL}" Feb 27 10:28:20 crc kubenswrapper[4728]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:20 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:20 crc kubenswrapper[4728]: E0227 10:28:20.736145 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.743036 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.753396 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.781544 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.791334 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wv4rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"861d0263-093a-4dfa-93d7-d3efb29da94b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wv4rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:20 crc kubenswrapper[4728]: E0227 10:28:20.808598 4728 kubelet.go:2916] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.811962 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d5
6b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.826950 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.839050 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\",\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.851305 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.860499 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.869353 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.879964 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.891575 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.903387 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-97psz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24vxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-97psz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.918695 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.934994 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:20 crc kubenswrapper[4728]: I0227 10:28:20.947186 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87b02f57-7d16-404a-9dad-36d71fe43a6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kqwxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:21 crc kubenswrapper[4728]: I0227 10:28:21.724035 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:21 crc kubenswrapper[4728]: E0227 10:28:21.724163 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wv4rk" podUID="861d0263-093a-4dfa-93d7-d3efb29da94b" Feb 27 10:28:21 crc kubenswrapper[4728]: E0227 10:28:21.727756 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:28:21 crc kubenswrapper[4728]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 27 10:28:21 crc kubenswrapper[4728]: apiVersion: v1 Feb 27 10:28:21 crc kubenswrapper[4728]: clusters: Feb 27 10:28:21 crc kubenswrapper[4728]: - cluster: Feb 27 10:28:21 crc kubenswrapper[4728]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 27 10:28:21 crc kubenswrapper[4728]: server: https://api-int.crc.testing:6443 Feb 27 10:28:21 crc kubenswrapper[4728]: name: default-cluster Feb 27 10:28:21 crc kubenswrapper[4728]: contexts: Feb 27 10:28:21 crc kubenswrapper[4728]: - context: Feb 27 10:28:21 crc kubenswrapper[4728]: cluster: default-cluster Feb 27 10:28:21 crc kubenswrapper[4728]: namespace: default Feb 27 10:28:21 crc kubenswrapper[4728]: user: default-auth Feb 27 10:28:21 crc kubenswrapper[4728]: name: default-context Feb 27 10:28:21 crc kubenswrapper[4728]: current-context: default-context Feb 27 10:28:21 crc kubenswrapper[4728]: kind: Config Feb 27 10:28:21 crc kubenswrapper[4728]: preferences: {} Feb 27 10:28:21 crc kubenswrapper[4728]: users: Feb 27 10:28:21 crc kubenswrapper[4728]: - name: default-auth Feb 27 
10:28:21 crc kubenswrapper[4728]: user: Feb 27 10:28:21 crc kubenswrapper[4728]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 27 10:28:21 crc kubenswrapper[4728]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 27 10:28:21 crc kubenswrapper[4728]: EOF Feb 27 10:28:21 crc kubenswrapper[4728]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnx4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-rpr29_openshift-ovn-kubernetes(b021ff26-58a3-4418-b6ba-4aa8e0bb6746): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 27 10:28:21 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:28:21 crc kubenswrapper[4728]: E0227 10:28:21.728948 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.072863 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.089213 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.099847 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs\") pod 
\"network-metrics-daemon-wv4rk\" (UID: \"861d0263-093a-4dfa-93d7-d3efb29da94b\") " pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:22 crc kubenswrapper[4728]: E0227 10:28:22.100047 4728 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:28:22 crc kubenswrapper[4728]: E0227 10:28:22.100125 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs podName:861d0263-093a-4dfa-93d7-d3efb29da94b nodeName:}" failed. No retries permitted until 2026-02-27 10:28:26.1001047 +0000 UTC m=+126.062470916 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs") pod "network-metrics-daemon-wv4rk" (UID: "861d0263-093a-4dfa-93d7-d3efb29da94b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.116429 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.126405 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wv4rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"861d0263-093a-4dfa-93d7-d3efb29da94b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wv4rk\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.138256 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.162835 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.179309 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.198234 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.213758 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.229974 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.242210 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.258025 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.270612 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-97psz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24vxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-97psz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.288220 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\"
,\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.307073 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.324310 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87b02f57-7d16-404a-9dad-36d71fe43a6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kqwxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.339336 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.724125 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.724219 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:22 crc kubenswrapper[4728]: E0227 10:28:22.725284 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:22 crc kubenswrapper[4728]: I0227 10:28:22.724219 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:22 crc kubenswrapper[4728]: E0227 10:28:22.725109 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:22 crc kubenswrapper[4728]: E0227 10:28:22.725560 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:23 crc kubenswrapper[4728]: I0227 10:28:23.166082 4728 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 27 10:28:23 crc kubenswrapper[4728]: I0227 10:28:23.724750 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:23 crc kubenswrapper[4728]: E0227 10:28:23.725948 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wv4rk" podUID="861d0263-093a-4dfa-93d7-d3efb29da94b" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.239343 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-97psz" event={"ID":"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b","Type":"ContainerStarted","Data":"053066c93643a4cfda98a923faaa20d0bac7b090fc309322b54bbd09c74c7fcc"} Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.248672 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.256132 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.262409 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.271804 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.281120 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-97psz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053066c93643a4cfda98a923faaa20d0bac7b090fc309322b54bbd09c74c7fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24vxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-97psz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.296573 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\",\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\
\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.306478 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.318428 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87b02f57-7d16-404a-9dad-36d71fe43a6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kqwxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.326760 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.344116 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy 
cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.370421 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.381675 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wv4rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"861d0263-093a-4dfa-93d7-d3efb29da94b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wv4rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.396074 4728 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.407754 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.424841 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:28:24 crc kubenswrapper[4728]: E0227 10:28:24.425077 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:28:56.425045275 +0000 UTC m=+156.387411411 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.435186 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.449629 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.627489 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.627783 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.627961 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") 
pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:24 crc kubenswrapper[4728]: E0227 10:28:24.627830 4728 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.628125 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:24 crc kubenswrapper[4728]: E0227 10:28:24.628218 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:28:56.628195807 +0000 UTC m=+156.590561943 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 10:28:24 crc kubenswrapper[4728]: E0227 10:28:24.627896 4728 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:28:24 crc kubenswrapper[4728]: E0227 10:28:24.628114 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:28:24 crc kubenswrapper[4728]: E0227 10:28:24.628492 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 10:28:56.628456033 +0000 UTC m=+156.590822179 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 10:28:24 crc kubenswrapper[4728]: E0227 10:28:24.628567 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:28:24 crc kubenswrapper[4728]: E0227 10:28:24.628600 4728 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:28:24 crc kubenswrapper[4728]: E0227 10:28:24.628788 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 10:28:56.62871455 +0000 UTC m=+156.591080686 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:28:24 crc kubenswrapper[4728]: E0227 10:28:24.629053 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 10:28:24 crc kubenswrapper[4728]: E0227 10:28:24.629243 4728 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 10:28:24 crc kubenswrapper[4728]: E0227 10:28:24.629422 4728 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:28:24 crc kubenswrapper[4728]: E0227 10:28:24.629665 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 10:28:56.629644355 +0000 UTC m=+156.592010491 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.724644 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.724644 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:24 crc kubenswrapper[4728]: I0227 10:28:24.725019 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:24 crc kubenswrapper[4728]: E0227 10:28:24.725251 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:24 crc kubenswrapper[4728]: E0227 10:28:24.725677 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:24 crc kubenswrapper[4728]: E0227 10:28:24.726263 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:25 crc kubenswrapper[4728]: I0227 10:28:25.724091 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:25 crc kubenswrapper[4728]: E0227 10:28:25.724297 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wv4rk" podUID="861d0263-093a-4dfa-93d7-d3efb29da94b" Feb 27 10:28:25 crc kubenswrapper[4728]: E0227 10:28:25.810272 4728 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 10:28:26 crc kubenswrapper[4728]: I0227 10:28:26.145583 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs\") pod \"network-metrics-daemon-wv4rk\" (UID: \"861d0263-093a-4dfa-93d7-d3efb29da94b\") " pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:26 crc kubenswrapper[4728]: E0227 10:28:26.145880 4728 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:28:26 crc kubenswrapper[4728]: E0227 10:28:26.146005 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs podName:861d0263-093a-4dfa-93d7-d3efb29da94b nodeName:}" failed. No retries permitted until 2026-02-27 10:28:34.145971661 +0000 UTC m=+134.108337807 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs") pod "network-metrics-daemon-wv4rk" (UID: "861d0263-093a-4dfa-93d7-d3efb29da94b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:28:26 crc kubenswrapper[4728]: I0227 10:28:26.724721 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:26 crc kubenswrapper[4728]: I0227 10:28:26.724838 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:26 crc kubenswrapper[4728]: E0227 10:28:26.724893 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:26 crc kubenswrapper[4728]: I0227 10:28:26.724930 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:26 crc kubenswrapper[4728]: E0227 10:28:26.725123 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:26 crc kubenswrapper[4728]: E0227 10:28:26.725370 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:27 crc kubenswrapper[4728]: I0227 10:28:27.724590 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:27 crc kubenswrapper[4728]: E0227 10:28:27.724788 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wv4rk" podUID="861d0263-093a-4dfa-93d7-d3efb29da94b" Feb 27 10:28:28 crc kubenswrapper[4728]: I0227 10:28:28.724799 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:28 crc kubenswrapper[4728]: I0227 10:28:28.724884 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:28 crc kubenswrapper[4728]: E0227 10:28:28.724986 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:28 crc kubenswrapper[4728]: I0227 10:28:28.725023 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:28 crc kubenswrapper[4728]: E0227 10:28:28.725152 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:28 crc kubenswrapper[4728]: E0227 10:28:28.725370 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.724759 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:29 crc kubenswrapper[4728]: E0227 10:28:29.724919 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wv4rk" podUID="861d0263-093a-4dfa-93d7-d3efb29da94b" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.849295 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.849368 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.849385 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.849410 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.849428 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:29Z","lastTransitionTime":"2026-02-27T10:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:29 crc kubenswrapper[4728]: E0227 10:28:29.865958 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.870955 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.871015 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.871033 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.871059 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.871076 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:29Z","lastTransitionTime":"2026-02-27T10:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:29 crc kubenswrapper[4728]: E0227 10:28:29.887706 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.892582 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.892633 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.892647 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.892669 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.892686 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:29Z","lastTransitionTime":"2026-02-27T10:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:29 crc kubenswrapper[4728]: E0227 10:28:29.908122 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.914346 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.914399 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.914420 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.914445 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.914461 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:29Z","lastTransitionTime":"2026-02-27T10:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:29 crc kubenswrapper[4728]: E0227 10:28:29.929600 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.935476 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.935556 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.935577 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.935599 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:29 crc kubenswrapper[4728]: I0227 10:28:29.935618 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:29Z","lastTransitionTime":"2026-02-27T10:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 10:28:29 crc kubenswrapper[4728]: E0227 10:28:29.950894 4728 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79ce2621-f919-4f1d-8b5b-b727bcba43c7\\\",\\\"systemUUID\\\":\\\"08a24311-ed07-4912-ba2b-648ea93d1dc5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:29 crc kubenswrapper[4728]: E0227 10:28:29.951123 4728 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 10:28:30 crc kubenswrapper[4728]: I0227 10:28:30.723792 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:30 crc kubenswrapper[4728]: E0227 10:28:30.724066 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:30 crc kubenswrapper[4728]: I0227 10:28:30.724151 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:30 crc kubenswrapper[4728]: I0227 10:28:30.724216 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:30 crc kubenswrapper[4728]: E0227 10:28:30.724746 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:30 crc kubenswrapper[4728]: E0227 10:28:30.725145 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:30 crc kubenswrapper[4728]: I0227 10:28:30.750802 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:30 crc kubenswrapper[4728]: I0227 10:28:30.779169 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy 
cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:30 crc kubenswrapper[4728]: I0227 10:28:30.793361 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87b02f57-7d16-404a-9dad-36d71fe43a6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kqwxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:30 crc kubenswrapper[4728]: I0227 10:28:30.810203 4728 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:30 crc kubenswrapper[4728]: E0227 10:28:30.811425 4728 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 10:28:30 crc kubenswrapper[4728]: I0227 10:28:30.824005 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:30 crc kubenswrapper[4728]: I0227 10:28:30.850725 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:30 crc kubenswrapper[4728]: I0227 10:28:30.862928 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wv4rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"861d0263-093a-4dfa-93d7-d3efb29da94b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wv4rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:30 crc kubenswrapper[4728]: I0227 10:28:30.891652 4728 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0ef
b0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:30 crc kubenswrapper[4728]: I0227 10:28:30.907208 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:30 crc kubenswrapper[4728]: I0227 10:28:30.926106 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\"
,\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:30 crc kubenswrapper[4728]: I0227 10:28:30.944747 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:30 crc kubenswrapper[4728]: I0227 10:28:30.959008 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:30 crc kubenswrapper[4728]: I0227 10:28:30.974765 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:30 crc kubenswrapper[4728]: I0227 10:28:30.984409 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.000461 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.011084 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-97psz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053066c93643a4cfda98a923faaa20d0bac7b090fc309322b54bbd09c74c7fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\"
,\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24vxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-97psz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.262826 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" event={"ID":"87b02f57-7d16-404a-9dad-36d71fe43a6f","Type":"ContainerStarted","Data":"d387e9b7b68cb980ad84552e88b9d4a71d923cfa77e8f2d3d63476082400c796"} Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.263305 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" event={"ID":"87b02f57-7d16-404a-9dad-36d71fe43a6f","Type":"ContainerStarted","Data":"feeb44d669cb211049f789a5a2f7421a736794a197cf846995f94cf03bf8286b"} Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.281214 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\"
,\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.294196 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.305455 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.320365 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.330389 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.343058 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.354949 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-97psz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053066c93643a4cfda98a923faaa20d0bac7b090fc309322b54bbd09c74c7fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\"
,\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24vxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-97psz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.367973 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.387606 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.400591 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87b02f57-7d16-404a-9dad-36d71fe43a6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feeb44d669cb211049f789a5a2f7421a736794a197cf846995f94cf03bf8286b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d387e9b7b68cb980ad84552e88b9d4a71d923
cfa77e8f2d3d63476082400c796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kqwxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.418933 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.432619 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.458729 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.470486 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wv4rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"861d0263-093a-4dfa-93d7-d3efb29da94b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wv4rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.499930 4728 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0ef
b0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.515376 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:31 crc kubenswrapper[4728]: I0227 10:28:31.724693 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:31 crc kubenswrapper[4728]: E0227 10:28:31.725307 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wv4rk" podUID="861d0263-093a-4dfa-93d7-d3efb29da94b" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.268029 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-n4c77" event={"ID":"ef8ed63c-6947-4b06-8742-54b7ba279aa7","Type":"ContainerStarted","Data":"f74dd933e424a72f0cc61ea8d85a4b458af75d261b2a2a1c0bfc8d9b99459565"} Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.270368 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0c542bb7c40fa1e7dc7d28a29de0799a35100b306355b0929b3861d15dbc71bb"} Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.283448 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.302044 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.317763 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87b02f57-7d16-404a-9dad-36d71fe43a6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feeb44d669cb211049f789a5a2f7421a736794a197cf846995f94cf03bf8286b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d387e9b7b68cb980ad84552e88b9d4a71d923
cfa77e8f2d3d63476082400c796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kqwxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.333070 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.342204 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.363396 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.373376 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wv4rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"861d0263-093a-4dfa-93d7-d3efb29da94b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wv4rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.397765 4728 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0ef
b0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.408543 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.418533 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-97psz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053066c93643a4cfda98a923faaa20d0bac7b090fc309322b54bbd09c74c7fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24vxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-97psz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.433338 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\",\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\
\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.451485 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.461185 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.474268 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.481186 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74dd933e424a72f0cc61ea8d85a4b458af75d261b2a2a1c0bfc8d9b99459565\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.489252 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.499312 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.518269 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.531173 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87b02f57-7d16-404a-9dad-36d71fe43a6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feeb44d669cb211049f789a5a2f7421a736794a197cf846995f94cf03bf8286b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d387e9b7b68cb980ad84552e88b9d4a71d923
cfa77e8f2d3d63476082400c796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kqwxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.542671 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wv4rk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"861d0263-093a-4dfa-93d7-d3efb29da94b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wv4rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.558327 4728 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.570674 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.597416 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.610931 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.638660 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.653598 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.665602 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74dd933e424a72f0cc61ea8d85a4b458af75d261b2a2a1c0bfc8d9b99459565\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.678192 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.685450 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-97psz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053066c93643a4cfda98a923faaa20d0bac7b090fc309322b54bbd0
9c74c7fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24vxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-97psz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.695210 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\"
,\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.704445 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c542bb7c40fa1e7dc7d28a29de0799a35100b306355b0929b3861d15dbc71bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.712745 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.724180 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.724245 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:32 crc kubenswrapper[4728]: E0227 10:28:32.724390 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.724552 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:32 crc kubenswrapper[4728]: E0227 10:28:32.724746 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:32 crc kubenswrapper[4728]: E0227 10:28:32.724768 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:32 crc kubenswrapper[4728]: I0227 10:28:32.735094 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.284266 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"55f053ff36d628f23a3a8b0b0a7bcb47a12acfd9c902a2b2eca42bf0179fd289"} Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.285412 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"983e19c2154a1b01db67f4b9f25a99f1aecc3d35ea0f570828eabe5e7d0b10ac"} Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.302547 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.326181 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.342031 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87b02f57-7d16-404a-9dad-36d71fe43a6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feeb44d669cb211049f789a5a2f7421a736794a197cf846995f94cf03bf8286b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d387e9b7b68cb980ad84552e88b9d4a71d923
cfa77e8f2d3d63476082400c796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kqwxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.354295 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.368000 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55f053ff36d628f23a3a8b0b0a7bcb47a12acfd9c902a2b2eca42bf0179fd289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e19c2154a1b01db67f4b9f25a99f1aecc3d35
ea0f570828eabe5e7d0b10ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.389316 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.398781 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wv4rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"861d0263-093a-4dfa-93d7-d3efb29da94b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wv4rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.411945 4728 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9943e101-f32e-4d5a-9103-2fabd664424f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe83994e55479337722acb63999d86f58d298b82b4ab6eab9b1bb66c9471ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a784f19afff543a28851473da4380cc37388d631ee168f5d6cc5969b97a3f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5590dd2a64c5ce1cfcffcf7f25149f7664dd72d2f811e4062447c35c066644ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f4065dfe803d293615bc7131d6949ba9fbd78c633b7d762e3b10370e0d91406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4065dfe803d293615bc7131d6949ba9fbd78c633b7d762e3b10370e0d91406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.432431 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID
\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.453815 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.467079 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\"
,\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.479897 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c542bb7c40fa1e7dc7d28a29de0799a35100b306355b0929b3861d15dbc71bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.490573 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.499095 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.506555 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74dd933e424a72f0cc61ea8d85a4b458af75d261b2a2a1c0bfc8d9b99459565\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.515886 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.522634 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-97psz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053066c93643a4cfda98a923faaa20d0bac7b090fc309322b54bbd0
9c74c7fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24vxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-97psz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:33 crc kubenswrapper[4728]: I0227 10:28:33.724344 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:33 crc kubenswrapper[4728]: E0227 10:28:33.724490 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wv4rk" podUID="861d0263-093a-4dfa-93d7-d3efb29da94b" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.237799 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs\") pod \"network-metrics-daemon-wv4rk\" (UID: \"861d0263-093a-4dfa-93d7-d3efb29da94b\") " pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:34 crc kubenswrapper[4728]: E0227 10:28:34.237994 4728 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:28:34 crc kubenswrapper[4728]: E0227 10:28:34.238449 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs podName:861d0263-093a-4dfa-93d7-d3efb29da94b nodeName:}" failed. No retries permitted until 2026-02-27 10:28:50.238427655 +0000 UTC m=+150.200793761 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs") pod "network-metrics-daemon-wv4rk" (UID: "861d0263-093a-4dfa-93d7-d3efb29da94b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.289365 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ad4be51d1ece16e724766bb0648fd339e6db5466235634e50c17804dc57df694"} Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.300585 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.312178 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55f053ff36d628f23a3a8b0b0a7bcb47a12acfd9c902a2b2eca42bf0179fd289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e19c2154a1b01db67f4b9f25a99f1aecc3d35
ea0f570828eabe5e7d0b10ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.340316 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.351478 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wv4rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"861d0263-093a-4dfa-93d7-d3efb29da94b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wv4rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.373119 4728 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.389753 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9943e101-f32e-4d5a-9103-2fabd664424f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe83994e55479337722acb63999d86f58d298b82b4ab6eab9b1bb66c9471ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a784f19afff543a28851473da4380cc37388d631ee168f5d6cc5969b97a3f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5590dd2a64c5ce1cfcffcf7f25149f7664dd72d2f811e4062447c35c066644ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f4065dfe803d293615bc7131d6949ba9fbd78c633b7d762e3b10370e0d91406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6f4065dfe803d293615bc7131d6949ba9fbd78c633b7d762e3b10370e0d91406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.415317 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4
a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.426409 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74dd933e424a72f0cc61ea8d85a4b458af75d261b2a2a1c0bfc8d9b99459565\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.440051 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.449856 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-97psz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053066c93643a4cfda98a923faaa20d0bac7b090fc309322b54bbd0
9c74c7fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24vxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-97psz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.474055 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\"
,\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.485719 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c542bb7c40fa1e7dc7d28a29de0799a35100b306355b0929b3861d15dbc71bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.495915 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.513694 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.525830 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4be51d1ece16e724766bb0648fd339e6db5466235634e50c17804dc57df694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 
10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.538639 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.548860 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87b02f57-7d16-404a-9dad-36d71fe43a6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feeb44d669cb211049f789a5a2f7421a736794a197cf846995f94cf03bf8286b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d387e9b7b68cb980ad84552e88b9d4a71d923
cfa77e8f2d3d63476082400c796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kqwxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.723992 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.724042 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:34 crc kubenswrapper[4728]: I0227 10:28:34.724275 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:34 crc kubenswrapper[4728]: E0227 10:28:34.724304 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:34 crc kubenswrapper[4728]: E0227 10:28:34.724379 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:34 crc kubenswrapper[4728]: E0227 10:28:34.724570 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.294816 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9tlth" event={"ID":"468912b7-185a-4869-9a65-70cbcb3c4fb1","Type":"ContainerStarted","Data":"f67177f4cd8151bc3425e0989b15e78fd050fb5688cddece113bc16eed09512f"} Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.297449 4728 generic.go:334] "Generic (PLEG): container finished" podID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerID="4fae001e6877f3c7fb93e4b04bc24d137dc9dc274d4acac247fb84bd9c32d80d" exitCode=0 Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.297571 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerDied","Data":"4fae001e6877f3c7fb93e4b04bc24d137dc9dc274d4acac247fb84bd9c32d80d"} Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.299295 4728 generic.go:334] "Generic (PLEG): container finished" podID="0cd760d8-c9b2-4e95-97a3-94bc759c9884" containerID="442ea00813c58e538b77c63ad9e68c78d159e8bc2bfff00af07ae1a9161af05f" exitCode=0 Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.299352 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" event={"ID":"0cd760d8-c9b2-4e95-97a3-94bc759c9884","Type":"ContainerDied","Data":"442ea00813c58e538b77c63ad9e68c78d159e8bc2bfff00af07ae1a9161af05f"} Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.323771 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.338108 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.351668 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9943e101-f32e-4d5a-9103-2fabd664424f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe83994e55479337722acb63999d86f58d298b82b4ab6eab9b1bb66c9471ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a784f19afff543a28851473da4380cc37388d631ee168f5d6cc5969b97a3f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5590dd2a64c5ce1cfcffcf7f25149f7664dd72d2f811e4062447c35c066644ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f4065dfe803d293615bc7131d6949ba9fbd78c633b7d762e3b10370e0d91406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6f4065dfe803d293615bc7131d6949ba9fbd78c633b7d762e3b10370e0d91406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.360568 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c542bb7c40fa1e7dc7d28a29de0799a35100b306355b0929b3861d15dbc71bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.371153 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.382093 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.390850 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74dd933e424a72f0cc61ea8d85a4b458af75d261b2a2a1c0bfc8d9b99459565\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.404647 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67177f4cd8151bc3425e0989b15e78fd050fb5688cddece113bc16eed09512f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.415058 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-97psz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053066c93643a4cfda98a923faaa20d0bac7b090fc309322b54bbd09c74c7fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:24Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24vxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-97psz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.434038 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\"
,\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.449495 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.457585 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87b02f57-7d16-404a-9dad-36d71fe43a6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feeb44d669cb211049f789a5a2f7421a736794a197cf846995f94cf03bf8286b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d387e9b7b68cb980ad84552e88b9d4a71d923
cfa77e8f2d3d63476082400c796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kqwxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.468575 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4be51d1ece16e724766bb0648fd339e6db5466235634e50c17804dc57df694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 
10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.480453 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55f053ff36d628f23a3a8b0b0a7bcb47a12acfd9c902a2b2eca42bf0179fd289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e19c2154a1b01db67f4b9f25a99f1aecc3d35ea0f570828eabe5e7d0b10ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.496885 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.505734 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wv4rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"861d0263-093a-4dfa-93d7-d3efb29da94b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wv4rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.517927 4728 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.530250 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9tlth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"468912b7-185a-4869-9a65-70cbcb3c4fb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67177f4cd8151bc3425e0989b15e78fd050fb5688cddece113bc16eed09512f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjq97\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9tlth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.538772 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-97psz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9f1c81-b0b0-4016-9ef5-38cd92277b5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://053066c93643a4cfda98a923faaa20d0bac7b090fc309322b54bbd09c74c7fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24vxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-97psz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.554182 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ded44d8-d959-4509-be28-3560f21eebda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T10:27:28Z\\\",\\\"message\\\":\\\"W0227 10:27:27.964275 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0227 10:27:27.965166 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772188047 cert, and key in /tmp/serving-cert-1003999021/serving-signer.crt, /tmp/serving-cert-1003999021/serving-signer.key\\\\nI0227 10:27:28.312473 1 observer_polling.go:159] Starting file observer\\\\nW0227 10:27:28.324092 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 10:27:28.324213 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 10:27:28.326919 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1003999021/tls.crt::/tmp/serving-cert-1003999021/tls.key\\\\\\\"\\\\nI0227 10:27:28.533892 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 10:27:28.537238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 10:27:28.537256 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 10:27:28.537275 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0227 10:27:28.537280 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0227 10:27:28.543870 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T10:27:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\
\"containerID\\\":\\\"cri-o://c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.564784 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c542bb7c40fa1e7dc7d28a29de0799a35100b306355b0929b3861d15dbc71bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.581716 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.592346 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.601032 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n4c77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8ed63c-6947-4b06-8742-54b7ba279aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74dd933e424a72f0cc61ea8d85a4b458af75d261b2a2a1c0bfc8d9b99459565\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phsn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n4c77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.610936 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4be51d1ece16e724766bb0648fd339e6db5466235634e50c17804dc57df694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name
\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.623938 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd760d8-c9b2-4e95-97a3-94bc759c9884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://442ea00813c58e538b77c63ad9e68c78d159e8bc2bfff00af07ae1a9161af05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442ea00813c58e538b77c63ad9e68c78d159e8bc2bfff00af07ae1a9161af05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qtlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xghgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.633189 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87b02f57-7d16-404a-9dad-36d71fe43a6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feeb44d669cb211049f789a5a2f7421a736794a197cf846995f94cf03bf8286b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d387e9b7b68cb980ad84552e88b9d4a71d923
cfa77e8f2d3d63476082400c796\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk895\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kqwxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.642456 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.650730 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cfd349-f825-497b-b698-7fb6bc258b22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55f053ff36d628f23a3a8b0b0a7bcb47a12acfd9c902a2b2eca42bf0179fd289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e19c2154a1b01db67f4b9f25a99f1aecc3d35
ea0f570828eabe5e7d0b10ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c64ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf2hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.679396 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fae001e6877f3c7fb93e4b04bc24d137dc9dc274d4acac247fb84bd9c32d80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fae001e6877f3c7fb93e4b04bc24d137dc9dc274d4acac247fb84bd9c32d80d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnx4z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rpr29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.690170 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wv4rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"861d0263-093a-4dfa-93d7-d3efb29da94b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:28:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wv4rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.704951 4728 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9943e101-f32e-4d5a-9103-2fabd664424f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe83994e55479337722acb63999d86f58d298b82b4ab6eab9b1bb66c9471ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a784f19afff543a28851473da4380cc37388d631ee168f5d6cc5969b97a3f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5590dd2a64c5ce1cfcffcf7f25149f7664dd72d2f811e4062447c35c066644ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f4065dfe803d293615bc7131d6949ba9fbd78c633b7d762e3b10370e0d91406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4065dfe803d293615bc7131d6949ba9fbd78c633b7d762e3b10370e0d91406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.724394 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:35 crc kubenswrapper[4728]: E0227 10:28:35.724800 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wv4rk" podUID="861d0263-093a-4dfa-93d7-d3efb29da94b" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.729273 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: I0227 10:28:35.741369 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:35 crc kubenswrapper[4728]: E0227 10:28:35.812820 4728 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.308157 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerStarted","Data":"b09c7887821795cd7de5864ce224a276f25faa70f181b308de3f418f7b1a241f"} Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.310252 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerStarted","Data":"2bfe2d245cfb745fa96a831c2aedf5cc232e00a62b801485743d7da8e9ae6406"} Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.310454 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerStarted","Data":"97ccafee960cda236a6c78d9ac91eb112c587cbb3a09b3f3fe63fd6a169527df"} Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.310692 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerStarted","Data":"226b2fb5a272504808ea376efa8cfc95a1089240a4d41df99b2a13c37e97dabd"} Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.310884 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerStarted","Data":"a2be6adb37ecb98705469e4fb97ada883e4ceb5c205af3e70b54b1a6f88b0a83"} Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.311087 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerStarted","Data":"6fe524630c9ea646073ef4c97fedec7a4605b268bff442198ad57cf07a033f1e"} Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.311285 4728 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"66833e90c0ee3616a6966da0b1e262cf4778f4e095d2d80dd4d59fd85d9cf9dc"} Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.311535 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"44caa0660bfce5a8bf6140c351ad07239826225c13d9f7d8e840b934f363e149"} Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.313688 4728 generic.go:334] "Generic (PLEG): container finished" podID="0cd760d8-c9b2-4e95-97a3-94bc759c9884" containerID="198acd45002411a26ce602f8c0a49b1a7008f9256d65d82dfb2a2620df6e4d38" exitCode=0 Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.313752 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" event={"ID":"0cd760d8-c9b2-4e95-97a3-94bc759c9884","Type":"ContainerDied","Data":"198acd45002411a26ce602f8c0a49b1a7008f9256d65d82dfb2a2620df6e4d38"} Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.327422 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9943e101-f32e-4d5a-9103-2fabd664424f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe83994e55479337722acb63999d86f58d298b82b4ab6eab9b1bb66c9471ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a784f19afff543a28851473da4380cc37388d631ee168f5d6cc5969b97a3f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5590dd2a64c5ce1cfcffcf7f25149f7664dd72d2f811e4062447c35c066644ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f4065dfe803d293615bc7131d6949ba9fbd78c633b7d762e3b10370e0d91406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6f4065dfe803d293615bc7131d6949ba9fbd78c633b7d762e3b10370e0d91406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.355155 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f942aba-66f5-4353-b2f7-53d7ba94ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T10:26:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5470266565899e6fda78eec789f70994968d19b1001f6340a99cfd2b73933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4
a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13903f4839e360a4bd61167579f8ba8936c176b194af3ed693fc7a3b6c88fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d2407a2f1f0fcd2f2dd4efda991f2d014a4d8c85592f2e93df7e4860a46862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8643806fee981c732e483bce1bf93a8e35ab71964444bb2b9c476d7c93f85869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://497509ccbf9c511546f719138ff58231a55c407323b370c2687557b87a660c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T10:26:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cf8e26fed561fc883edbc30e5a062308c49ec85bc0efb0a64fef14f9dacb1b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e814ab45a5c01528b03c209cbe306360276b0a22ca9270fe7ce3bf36e0ef5f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f217547dadd68ada46a99f3964971e47d56b9b1e45e018813ddce20347d4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T10:26:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T10:26:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T10:26:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.369760 4728 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T10:27:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.419913 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9tlth" podStartSLOduration=66.419877621 podStartE2EDuration="1m6.419877621s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:28:36.403217439 +0000 UTC m=+136.365583595" watchObservedRunningTime="2026-02-27 10:28:36.419877621 +0000 UTC m=+136.382243737" Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.420175 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-97psz" podStartSLOduration=67.420169659 podStartE2EDuration="1m7.420169659s" podCreationTimestamp="2026-02-27 10:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:28:36.419987604 +0000 UTC m=+136.382353730" watchObservedRunningTime="2026-02-27 10:28:36.420169659 +0000 UTC m=+136.382535775" Feb 27 10:28:36 crc 
kubenswrapper[4728]: I0227 10:28:36.444255 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=39.44423272 podStartE2EDuration="39.44423272s" podCreationTimestamp="2026-02-27 10:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:28:36.443792258 +0000 UTC m=+136.406158374" watchObservedRunningTime="2026-02-27 10:28:36.44423272 +0000 UTC m=+136.406598856" Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.512633 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-n4c77" podStartSLOduration=67.512605506 podStartE2EDuration="1m7.512605506s" podCreationTimestamp="2026-02-27 10:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:28:36.512063621 +0000 UTC m=+136.474429737" watchObservedRunningTime="2026-02-27 10:28:36.512605506 +0000 UTC m=+136.474971642" Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.577025 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kqwxf" podStartSLOduration=66.577007199 podStartE2EDuration="1m6.577007199s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:28:36.559652668 +0000 UTC m=+136.522018774" watchObservedRunningTime="2026-02-27 10:28:36.577007199 +0000 UTC m=+136.539373305" Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.625687 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podStartSLOduration=66.625665997 podStartE2EDuration="1m6.625665997s" 
podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:28:36.596598594 +0000 UTC m=+136.558964740" watchObservedRunningTime="2026-02-27 10:28:36.625665997 +0000 UTC m=+136.588032133" Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.699093 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=28.699074174 podStartE2EDuration="28.699074174s" podCreationTimestamp="2026-02-27 10:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:28:36.698839658 +0000 UTC m=+136.661205844" watchObservedRunningTime="2026-02-27 10:28:36.699074174 +0000 UTC m=+136.661440290" Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.724043 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.724048 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.724229 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:36 crc kubenswrapper[4728]: E0227 10:28:36.724352 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:36 crc kubenswrapper[4728]: E0227 10:28:36.724560 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:36 crc kubenswrapper[4728]: E0227 10:28:36.724615 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:36 crc kubenswrapper[4728]: I0227 10:28:36.740920 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=4.740890038 podStartE2EDuration="4.740890038s" podCreationTimestamp="2026-02-27 10:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:28:36.740080145 +0000 UTC m=+136.702446271" watchObservedRunningTime="2026-02-27 10:28:36.740890038 +0000 UTC m=+136.703256194" Feb 27 10:28:37 crc kubenswrapper[4728]: I0227 10:28:37.318552 4728 generic.go:334] "Generic (PLEG): container finished" podID="0cd760d8-c9b2-4e95-97a3-94bc759c9884" containerID="467ed3b28797de38dbc430d232a709bdde053e5cb40001398937e2fd865201a5" exitCode=0 Feb 27 10:28:37 crc kubenswrapper[4728]: I0227 10:28:37.318639 4728 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" event={"ID":"0cd760d8-c9b2-4e95-97a3-94bc759c9884","Type":"ContainerDied","Data":"467ed3b28797de38dbc430d232a709bdde053e5cb40001398937e2fd865201a5"} Feb 27 10:28:37 crc kubenswrapper[4728]: I0227 10:28:37.724703 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:37 crc kubenswrapper[4728]: E0227 10:28:37.724887 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wv4rk" podUID="861d0263-093a-4dfa-93d7-d3efb29da94b" Feb 27 10:28:38 crc kubenswrapper[4728]: I0227 10:28:38.324923 4728 generic.go:334] "Generic (PLEG): container finished" podID="0cd760d8-c9b2-4e95-97a3-94bc759c9884" containerID="85f0caadf702c018a5d530c2ec8bef1e18dfedab44c92e3f7bb645faa771a031" exitCode=0 Feb 27 10:28:38 crc kubenswrapper[4728]: I0227 10:28:38.325009 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" event={"ID":"0cd760d8-c9b2-4e95-97a3-94bc759c9884","Type":"ContainerDied","Data":"85f0caadf702c018a5d530c2ec8bef1e18dfedab44c92e3f7bb645faa771a031"} Feb 27 10:28:38 crc kubenswrapper[4728]: I0227 10:28:38.330495 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerStarted","Data":"03ade0507ce072ec69afdcbe9da276564bbe5614b51ff1367e19f01c8e2e64fb"} Feb 27 10:28:38 crc kubenswrapper[4728]: I0227 10:28:38.724726 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:38 crc kubenswrapper[4728]: I0227 10:28:38.724746 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:38 crc kubenswrapper[4728]: E0227 10:28:38.725219 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:38 crc kubenswrapper[4728]: I0227 10:28:38.724762 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:38 crc kubenswrapper[4728]: E0227 10:28:38.725572 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:38 crc kubenswrapper[4728]: E0227 10:28:38.725637 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:39 crc kubenswrapper[4728]: I0227 10:28:39.343088 4728 generic.go:334] "Generic (PLEG): container finished" podID="0cd760d8-c9b2-4e95-97a3-94bc759c9884" containerID="eecdb800beb8590751a573ea162cea105aaebe772037faca683070a32d65755f" exitCode=0 Feb 27 10:28:39 crc kubenswrapper[4728]: I0227 10:28:39.343276 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" event={"ID":"0cd760d8-c9b2-4e95-97a3-94bc759c9884","Type":"ContainerDied","Data":"eecdb800beb8590751a573ea162cea105aaebe772037faca683070a32d65755f"} Feb 27 10:28:39 crc kubenswrapper[4728]: I0227 10:28:39.723919 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:39 crc kubenswrapper[4728]: E0227 10:28:39.724067 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wv4rk" podUID="861d0263-093a-4dfa-93d7-d3efb29da94b" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.073048 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.073402 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.073626 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.073775 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.073931 4728 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T10:28:40Z","lastTransitionTime":"2026-02-27T10:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.142438 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss"] Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.143022 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.145399 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.145733 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.145752 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.146765 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.294543 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/335c46dc-6884-4132-8aef-e3a9a81492f9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ghfss\" (UID: \"335c46dc-6884-4132-8aef-e3a9a81492f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.295098 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/335c46dc-6884-4132-8aef-e3a9a81492f9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ghfss\" (UID: \"335c46dc-6884-4132-8aef-e3a9a81492f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.295283 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/335c46dc-6884-4132-8aef-e3a9a81492f9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ghfss\" (UID: \"335c46dc-6884-4132-8aef-e3a9a81492f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.295491 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/335c46dc-6884-4132-8aef-e3a9a81492f9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ghfss\" (UID: \"335c46dc-6884-4132-8aef-e3a9a81492f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.295684 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/335c46dc-6884-4132-8aef-e3a9a81492f9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ghfss\" (UID: \"335c46dc-6884-4132-8aef-e3a9a81492f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.354887 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerStarted","Data":"1918119473c9678a4002db8e1aaf8917697032ea10fb68d07119849b6c95058d"} Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.355308 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.355353 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.355371 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.359419 4728 generic.go:334] "Generic (PLEG): container finished" podID="0cd760d8-c9b2-4e95-97a3-94bc759c9884" containerID="fac4e5f3c052c3522984f01b2fe7a6b364ce8f9b1c660a0cc09e33011bccacb1" exitCode=0 Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.359492 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" event={"ID":"0cd760d8-c9b2-4e95-97a3-94bc759c9884","Type":"ContainerDied","Data":"fac4e5f3c052c3522984f01b2fe7a6b364ce8f9b1c660a0cc09e33011bccacb1"} Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.391690 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" podStartSLOduration=70.391670881 podStartE2EDuration="1m10.391670881s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:28:40.386541105 +0000 UTC m=+140.348907231" watchObservedRunningTime="2026-02-27 10:28:40.391670881 +0000 UTC m=+140.354037017" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.392830 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.397281 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/335c46dc-6884-4132-8aef-e3a9a81492f9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ghfss\" (UID: \"335c46dc-6884-4132-8aef-e3a9a81492f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.397369 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/335c46dc-6884-4132-8aef-e3a9a81492f9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ghfss\" (UID: \"335c46dc-6884-4132-8aef-e3a9a81492f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.397409 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/335c46dc-6884-4132-8aef-e3a9a81492f9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ghfss\" (UID: \"335c46dc-6884-4132-8aef-e3a9a81492f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.397475 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/335c46dc-6884-4132-8aef-e3a9a81492f9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ghfss\" (UID: \"335c46dc-6884-4132-8aef-e3a9a81492f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.397563 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/335c46dc-6884-4132-8aef-e3a9a81492f9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ghfss\" (UID: \"335c46dc-6884-4132-8aef-e3a9a81492f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.397689 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/335c46dc-6884-4132-8aef-e3a9a81492f9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ghfss\" (UID: \"335c46dc-6884-4132-8aef-e3a9a81492f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" Feb 27 
10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.397766 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/335c46dc-6884-4132-8aef-e3a9a81492f9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ghfss\" (UID: \"335c46dc-6884-4132-8aef-e3a9a81492f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.398820 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/335c46dc-6884-4132-8aef-e3a9a81492f9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ghfss\" (UID: \"335c46dc-6884-4132-8aef-e3a9a81492f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.406785 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/335c46dc-6884-4132-8aef-e3a9a81492f9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ghfss\" (UID: \"335c46dc-6884-4132-8aef-e3a9a81492f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.407185 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.419809 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/335c46dc-6884-4132-8aef-e3a9a81492f9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ghfss\" (UID: \"335c46dc-6884-4132-8aef-e3a9a81492f9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.465423 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.725400 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.725464 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.725525 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:40 crc kubenswrapper[4728]: E0227 10:28:40.725734 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:40 crc kubenswrapper[4728]: E0227 10:28:40.726059 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:40 crc kubenswrapper[4728]: E0227 10:28:40.726135 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.746251 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 27 10:28:40 crc kubenswrapper[4728]: I0227 10:28:40.755618 4728 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 27 10:28:40 crc kubenswrapper[4728]: E0227 10:28:40.813886 4728 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 10:28:41 crc kubenswrapper[4728]: I0227 10:28:41.365404 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" event={"ID":"335c46dc-6884-4132-8aef-e3a9a81492f9","Type":"ContainerStarted","Data":"6b765c54232387c894cb45ca177adebc6cc303dd10770f15275a466a79bde0c8"} Feb 27 10:28:41 crc kubenswrapper[4728]: I0227 10:28:41.365479 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" event={"ID":"335c46dc-6884-4132-8aef-e3a9a81492f9","Type":"ContainerStarted","Data":"7925d4c9833bf2739c3de1a4f34579334b59d8534e17df1895cf5fb60ee98672"} Feb 27 10:28:41 crc kubenswrapper[4728]: I0227 10:28:41.370668 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xghgn" event={"ID":"0cd760d8-c9b2-4e95-97a3-94bc759c9884","Type":"ContainerStarted","Data":"be4231e4af374cb6a6c5d79edaf92353de973f4d78d186d10d58ca340f313271"} Feb 27 10:28:41 crc kubenswrapper[4728]: I0227 10:28:41.388002 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ghfss" podStartSLOduration=72.387979705 podStartE2EDuration="1m12.387979705s" podCreationTimestamp="2026-02-27 10:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:28:41.386075831 +0000 UTC m=+141.348441967" watchObservedRunningTime="2026-02-27 10:28:41.387979705 +0000 UTC m=+141.350345841" Feb 27 10:28:41 crc kubenswrapper[4728]: I0227 10:28:41.724777 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:41 crc kubenswrapper[4728]: E0227 10:28:41.724953 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wv4rk" podUID="861d0263-093a-4dfa-93d7-d3efb29da94b" Feb 27 10:28:42 crc kubenswrapper[4728]: I0227 10:28:42.724716 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:42 crc kubenswrapper[4728]: E0227 10:28:42.724852 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:42 crc kubenswrapper[4728]: I0227 10:28:42.724734 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:42 crc kubenswrapper[4728]: I0227 10:28:42.724932 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:42 crc kubenswrapper[4728]: E0227 10:28:42.725053 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:42 crc kubenswrapper[4728]: E0227 10:28:42.725158 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:42 crc kubenswrapper[4728]: I0227 10:28:42.828035 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xghgn" podStartSLOduration=72.828009352 podStartE2EDuration="1m12.828009352s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:28:41.415294119 +0000 UTC m=+141.377660225" watchObservedRunningTime="2026-02-27 10:28:42.828009352 +0000 UTC m=+142.790375468" Feb 27 10:28:42 crc kubenswrapper[4728]: I0227 10:28:42.828420 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wv4rk"] Feb 27 10:28:42 crc kubenswrapper[4728]: I0227 10:28:42.828557 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:42 crc kubenswrapper[4728]: E0227 10:28:42.828700 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wv4rk" podUID="861d0263-093a-4dfa-93d7-d3efb29da94b" Feb 27 10:28:44 crc kubenswrapper[4728]: I0227 10:28:44.724204 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:44 crc kubenswrapper[4728]: I0227 10:28:44.724238 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:44 crc kubenswrapper[4728]: E0227 10:28:44.724776 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wv4rk" podUID="861d0263-093a-4dfa-93d7-d3efb29da94b" Feb 27 10:28:44 crc kubenswrapper[4728]: I0227 10:28:44.724395 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:44 crc kubenswrapper[4728]: E0227 10:28:44.724924 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:44 crc kubenswrapper[4728]: I0227 10:28:44.724941 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:44 crc kubenswrapper[4728]: E0227 10:28:44.725018 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:44 crc kubenswrapper[4728]: E0227 10:28:44.725324 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:45 crc kubenswrapper[4728]: E0227 10:28:45.815703 4728 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:28:46 crc kubenswrapper[4728]: I0227 10:28:46.724240 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:46 crc kubenswrapper[4728]: I0227 10:28:46.724287 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:46 crc kubenswrapper[4728]: I0227 10:28:46.724293 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:46 crc kubenswrapper[4728]: I0227 10:28:46.724352 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:46 crc kubenswrapper[4728]: E0227 10:28:46.724449 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wv4rk" podUID="861d0263-093a-4dfa-93d7-d3efb29da94b" Feb 27 10:28:46 crc kubenswrapper[4728]: E0227 10:28:46.724786 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:46 crc kubenswrapper[4728]: E0227 10:28:46.724947 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:46 crc kubenswrapper[4728]: E0227 10:28:46.725089 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:46 crc kubenswrapper[4728]: I0227 10:28:46.739813 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 27 10:28:48 crc kubenswrapper[4728]: I0227 10:28:48.724704 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:48 crc kubenswrapper[4728]: I0227 10:28:48.724792 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:48 crc kubenswrapper[4728]: E0227 10:28:48.724944 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:48 crc kubenswrapper[4728]: I0227 10:28:48.724990 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:48 crc kubenswrapper[4728]: I0227 10:28:48.724968 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:48 crc kubenswrapper[4728]: E0227 10:28:48.725156 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wv4rk" podUID="861d0263-093a-4dfa-93d7-d3efb29da94b" Feb 27 10:28:48 crc kubenswrapper[4728]: E0227 10:28:48.725325 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:48 crc kubenswrapper[4728]: E0227 10:28:48.725450 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:50 crc kubenswrapper[4728]: I0227 10:28:50.313258 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs\") pod \"network-metrics-daemon-wv4rk\" (UID: \"861d0263-093a-4dfa-93d7-d3efb29da94b\") " pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:50 crc kubenswrapper[4728]: E0227 10:28:50.313546 4728 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:28:50 crc kubenswrapper[4728]: E0227 10:28:50.313818 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs podName:861d0263-093a-4dfa-93d7-d3efb29da94b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:22.31379082 +0000 UTC m=+182.276156956 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs") pod "network-metrics-daemon-wv4rk" (UID: "861d0263-093a-4dfa-93d7-d3efb29da94b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 10:28:50 crc kubenswrapper[4728]: I0227 10:28:50.723849 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:50 crc kubenswrapper[4728]: I0227 10:28:50.723911 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:50 crc kubenswrapper[4728]: E0227 10:28:50.724068 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 10:28:50 crc kubenswrapper[4728]: I0227 10:28:50.724158 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:50 crc kubenswrapper[4728]: E0227 10:28:50.726310 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wv4rk" podUID="861d0263-093a-4dfa-93d7-d3efb29da94b" Feb 27 10:28:50 crc kubenswrapper[4728]: I0227 10:28:50.726395 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:50 crc kubenswrapper[4728]: E0227 10:28:50.726957 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 10:28:50 crc kubenswrapper[4728]: E0227 10:28:50.727321 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 10:28:50 crc kubenswrapper[4728]: I0227 10:28:50.745474 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 27 10:28:50 crc kubenswrapper[4728]: I0227 10:28:50.752772 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=4.752745746 podStartE2EDuration="4.752745746s" podCreationTimestamp="2026-02-27 10:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:28:50.751620465 +0000 UTC m=+150.713986641" watchObservedRunningTime="2026-02-27 10:28:50.752745746 +0000 UTC m=+150.715111892" Feb 27 10:28:52 crc kubenswrapper[4728]: I0227 10:28:52.724429 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:52 crc kubenswrapper[4728]: I0227 10:28:52.724490 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:52 crc kubenswrapper[4728]: I0227 10:28:52.728545 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 27 10:28:52 crc kubenswrapper[4728]: I0227 10:28:52.728843 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 27 10:28:52 crc kubenswrapper[4728]: I0227 10:28:52.728947 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 27 10:28:52 crc kubenswrapper[4728]: I0227 10:28:52.728966 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 27 10:28:52 crc kubenswrapper[4728]: I0227 10:28:52.728973 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:52 crc kubenswrapper[4728]: I0227 10:28:52.730142 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:28:52 crc kubenswrapper[4728]: I0227 10:28:52.828924 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 27 10:28:52 crc kubenswrapper[4728]: I0227 10:28:52.828924 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 27 10:28:56 crc kubenswrapper[4728]: I0227 10:28:56.498826 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:28:56 crc kubenswrapper[4728]: E0227 10:28:56.499175 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:30:00.499135986 +0000 UTC m=+220.461502122 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:28:56 crc kubenswrapper[4728]: I0227 10:28:56.701102 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:56 crc kubenswrapper[4728]: I0227 10:28:56.701175 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:56 crc kubenswrapper[4728]: I0227 10:28:56.701252 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:56 crc kubenswrapper[4728]: I0227 10:28:56.701290 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:56 crc kubenswrapper[4728]: I0227 10:28:56.704301 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:56 crc kubenswrapper[4728]: I0227 10:28:56.709094 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:56 crc kubenswrapper[4728]: I0227 10:28:56.709110 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:56 crc kubenswrapper[4728]: I0227 10:28:56.709953 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:56 crc 
kubenswrapper[4728]: I0227 10:28:56.740753 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 10:28:56 crc kubenswrapper[4728]: I0227 10:28:56.750927 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 10:28:56 crc kubenswrapper[4728]: I0227 10:28:56.756868 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:57 crc kubenswrapper[4728]: W0227 10:28:57.040554 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-e74915425c458a8ea1f73d919642c7e2ecde1f7eb63ac4614e8269556820e638 WatchSource:0}: Error finding container e74915425c458a8ea1f73d919642c7e2ecde1f7eb63ac4614e8269556820e638: Status 404 returned error can't find the container with id e74915425c458a8ea1f73d919642c7e2ecde1f7eb63ac4614e8269556820e638 Feb 27 10:28:57 crc kubenswrapper[4728]: W0227 10:28:57.062210 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-6db53bdc0bb493a3b47adb53cf3dbdf43a470aaf2c9e98a69bd1a3d852adb74a WatchSource:0}: Error finding container 6db53bdc0bb493a3b47adb53cf3dbdf43a470aaf2c9e98a69bd1a3d852adb74a: Status 404 returned error can't find the container with id 6db53bdc0bb493a3b47adb53cf3dbdf43a470aaf2c9e98a69bd1a3d852adb74a Feb 27 10:28:57 crc kubenswrapper[4728]: I0227 10:28:57.472984 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"26a3005beab2c8aea8504ea84c6afbdc946c4dfc6dc4d79ef572a40b0b4c2951"} Feb 27 10:28:57 crc kubenswrapper[4728]: I0227 10:28:57.473051 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6db53bdc0bb493a3b47adb53cf3dbdf43a470aaf2c9e98a69bd1a3d852adb74a"} Feb 27 10:28:57 crc kubenswrapper[4728]: I0227 10:28:57.478237 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"417d6e1dd0f58156d0ea6fd37d6424c8013e78a3f415b3ace5ea44cde08e9c43"} Feb 27 10:28:57 crc kubenswrapper[4728]: I0227 10:28:57.478323 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7ac878c815eccc1d7bc555ab47e8189f93cac4bbea37b01dcb6d6907c45df7a3"} Feb 27 10:28:57 crc kubenswrapper[4728]: I0227 10:28:57.478796 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:28:57 crc kubenswrapper[4728]: I0227 10:28:57.481164 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"62de421b85fee92d004e1f2acadadac92c09bccac2e0f873c8e5d3ef5dd127f9"} Feb 27 10:28:57 crc kubenswrapper[4728]: I0227 10:28:57.481219 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e74915425c458a8ea1f73d919642c7e2ecde1f7eb63ac4614e8269556820e638"} Feb 27 10:28:57 crc kubenswrapper[4728]: I0227 10:28:57.525960 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=7.525925023 podStartE2EDuration="7.525925023s" podCreationTimestamp="2026-02-27 10:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:28:57.523163245 +0000 UTC m=+157.485529411" watchObservedRunningTime="2026-02-27 10:28:57.525925023 +0000 UTC m=+157.488291159" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.725711 4728 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.774481 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2f6px"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.775442 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.777024 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d2nkm"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.777584 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.779665 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.779935 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.783175 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fwnnw"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.785385 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.785433 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.790724 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-fwnnw" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.795631 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.789596 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.789682 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.789873 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.790045 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.790325 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.790342 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.790614 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.790900 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.790964 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.792598 4728 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.795712 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.819199 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.820219 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-lj5z6"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.820887 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-tvflj"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.821441 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.821740 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.822405 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.846391 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.847454 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.849141 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.849465 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.849696 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.850009 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.850209 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.850374 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.850389 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.850635 4728 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-root-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.850916 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.850670 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.851184 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.851371 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.852462 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q9bgg"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.852934 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.853084 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.853212 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.853692 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q9bgg" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.854418 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.855061 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.855349 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.855554 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.855680 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.855774 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.855894 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.855996 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.856052 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.856522 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 27 10:29:00 crc 
kubenswrapper[4728]: I0227 10:29:00.856612 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.857427 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.857667 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.866803 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.866819 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w4vnn"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.870202 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.870550 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.870691 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.871251 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.871429 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.871649 4728 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.871816 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.871824 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.872324 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.872369 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.876839 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-c46ql"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.877532 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.878307 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.878600 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.878865 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.896565 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.896962 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9p4mb"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.897250 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.898088 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-c46ql" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.898753 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.898880 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.899051 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-9p4mb" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.899115 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7g65x"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.906468 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5lw67"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.899289 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.899222 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.905134 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.905343 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.905424 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.906403 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.905472 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.912487 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g65x" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.918082 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.919023 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.919542 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.919924 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5lw67" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.919927 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zdjkf"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.920099 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.920370 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.920978 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.924494 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.924535 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npkjm"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.927441 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.927631 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npkjm" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.927636 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.927731 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.927810 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.928563 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.928619 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.928720 4728 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.928763 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.928726 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.928842 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.928904 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.928995 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.929075 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.929108 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.929177 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.929222 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.929245 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 27 10:29:00 crc 
kubenswrapper[4728]: I0227 10:29:00.929492 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.931311 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.931443 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.931552 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-g4dbw"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.932152 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-g4dbw" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.936633 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.937407 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.937716 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.937835 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.937900 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.938030 4728 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.938044 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h55dj"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.938109 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.938236 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.938368 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.938474 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.938484 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-25vw6"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.938614 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.938733 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.938792 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.938977 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h55dj" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.939664 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.939696 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b89pv"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.939899 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.940795 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b89pv" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.941251 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-p7rtp"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.941677 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.941713 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.942108 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-p7rtp" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.942396 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d2nkm"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.943686 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9m4nv"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.944133 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9m4nv" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.948741 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d2c4\" (UniqueName: \"kubernetes.io/projected/6b237825-2c85-4de6-a839-b91e7d23d433-kube-api-access-2d2c4\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.948787 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1d22605-abd6-4fc6-8352-8fe78ec02332-console-serving-cert\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.948823 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/206a150d-98c4-4204-84e3-609198888fd4-serving-cert\") pod \"authentication-operator-69f744f599-lj5z6\" (UID: \"206a150d-98c4-4204-84e3-609198888fd4\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.948850 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-encryption-config\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.948869 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-serving-cert\") pod \"controller-manager-879f6c89f-d2nkm\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.948895 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-client-ca\") pod \"controller-manager-879f6c89f-d2nkm\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.948919 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b237825-2c85-4de6-a839-b91e7d23d433-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.948941 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/b1d22605-abd6-4fc6-8352-8fe78ec02332-console-oauth-config\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.948963 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba876990-999b-4cd2-bb68-624cbf1b5701-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fwnnw\" (UID: \"ba876990-999b-4cd2-bb68-624cbf1b5701\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwnnw" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.948987 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-image-import-ca\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949007 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba876990-999b-4cd2-bb68-624cbf1b5701-config\") pod \"machine-api-operator-5694c8668f-fwnnw\" (UID: \"ba876990-999b-4cd2-bb68-624cbf1b5701\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwnnw" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949030 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-config\") pod \"controller-manager-879f6c89f-d2nkm\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949051 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx27b\" (UniqueName: \"kubernetes.io/projected/b1d22605-abd6-4fc6-8352-8fe78ec02332-kube-api-access-sx27b\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949072 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d2nkm\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949095 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-etcd-serving-ca\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949117 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-audit\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949177 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6b237825-2c85-4de6-a839-b91e7d23d433-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949214 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-service-ca\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949273 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fs96\" (UniqueName: \"kubernetes.io/projected/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-kube-api-access-8fs96\") pod \"controller-manager-879f6c89f-d2nkm\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949303 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4fvx\" (UniqueName: \"kubernetes.io/projected/ba876990-999b-4cd2-bb68-624cbf1b5701-kube-api-access-s4fvx\") pod \"machine-api-operator-5694c8668f-fwnnw\" (UID: \"ba876990-999b-4cd2-bb68-624cbf1b5701\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwnnw" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949399 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-oauth-serving-cert\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949431 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6b237825-2c85-4de6-a839-b91e7d23d433-serving-cert\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949472 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-node-pullsecrets\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949516 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949540 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-trusted-ca-bundle\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949584 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6b237825-2c85-4de6-a839-b91e7d23d433-audit-policies\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949781 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-config\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949805 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-etcd-client\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949822 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-serving-cert\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949841 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/206a150d-98c4-4204-84e3-609198888fd4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-lj5z6\" (UID: \"206a150d-98c4-4204-84e3-609198888fd4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949867 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-console-config\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:00 
crc kubenswrapper[4728]: I0227 10:29:00.949883 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6b237825-2c85-4de6-a839-b91e7d23d433-audit-dir\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949907 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-audit-dir\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.949924 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6b237825-2c85-4de6-a839-b91e7d23d433-encryption-config\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.950706 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b237825-2c85-4de6-a839-b91e7d23d433-etcd-client\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.950737 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/206a150d-98c4-4204-84e3-609198888fd4-service-ca-bundle\") pod \"authentication-operator-69f744f599-lj5z6\" (UID: 
\"206a150d-98c4-4204-84e3-609198888fd4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.950759 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtrr9\" (UniqueName: \"kubernetes.io/projected/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-kube-api-access-mtrr9\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.950775 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ba876990-999b-4cd2-bb68-624cbf1b5701-images\") pod \"machine-api-operator-5694c8668f-fwnnw\" (UID: \"ba876990-999b-4cd2-bb68-624cbf1b5701\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwnnw" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.950794 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/206a150d-98c4-4204-84e3-609198888fd4-config\") pod \"authentication-operator-69f744f599-lj5z6\" (UID: \"206a150d-98c4-4204-84e3-609198888fd4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.950811 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k42fz\" (UniqueName: \"kubernetes.io/projected/206a150d-98c4-4204-84e3-609198888fd4-kube-api-access-k42fz\") pod \"authentication-operator-69f744f599-lj5z6\" (UID: \"206a150d-98c4-4204-84e3-609198888fd4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.955297 4728 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fwnnw"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.957755 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jzdfv"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.958159 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-8n6md"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.958681 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jzdfv" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.960039 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.960681 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.961433 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5gztc"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.962043 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5gztc" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.962169 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bgndl"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.970774 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.971331 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bgndl" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.971888 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xv8vk"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.972400 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.972725 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zjx6k"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.972887 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.975246 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.975576 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zjx6k" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.976217 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bqx2"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.980285 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bqx2" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.976923 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.989684 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.997249 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f5nff"] Feb 27 10:29:00 crc kubenswrapper[4728]: I0227 10:29:00.998699 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5nff" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.001639 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.006529 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.007189 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.011140 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.011724 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.013277 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536468-682zs"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.014073 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536468-682zs" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.015462 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7n4w8"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.016160 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7n4w8" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.017162 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.017891 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.018460 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.021256 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q9bgg"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.023412 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zdjkf"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.025044 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-qt4kh"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.025915 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qt4kh" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.028122 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xv8vk"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.029298 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.030307 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sftjq"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.031227 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wktw4"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.031724 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.032360 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wktw4" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.032667 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.034566 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2f6px"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.035702 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5gztc"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.038109 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-25vw6"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.039286 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tvflj"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.040070 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9m4nv"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.040999 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-g4dbw"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.042009 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9p4mb"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.043157 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-lj5z6"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.044544 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npkjm"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.045947 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w4vnn"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.047001 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-c46ql"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.047968 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.048993 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5lw67"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.050031 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qt4kh"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.051114 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-w976v"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.051487 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z2z7\" (UniqueName: \"kubernetes.io/projected/e0d7166c-3042-4706-9683-6c6a32d29a9c-kube-api-access-7z2z7\") pod \"route-controller-manager-6576b87f9c-tz4jj\" (UID: \"e0d7166c-3042-4706-9683-6c6a32d29a9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.051548 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmrkp\" (UniqueName: \"kubernetes.io/projected/0faf2938-8e5e-451e-99f9-c09124f6a767-kube-api-access-vmrkp\") pod \"machine-approver-56656f9798-bd58v\" (UID: 
\"0faf2938-8e5e-451e-99f9-c09124f6a767\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.051724 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ee9013e-5452-4e18-b4ce-3af1c8257662-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vmtwl\" (UID: \"7ee9013e-5452-4e18-b4ce-3af1c8257662\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.051767 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1d22605-abd6-4fc6-8352-8fe78ec02332-console-serving-cert\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.051792 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/206a150d-98c4-4204-84e3-609198888fd4-serving-cert\") pod \"authentication-operator-69f744f599-lj5z6\" (UID: \"206a150d-98c4-4204-84e3-609198888fd4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.051818 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk67w\" (UniqueName: \"kubernetes.io/projected/6bba0773-58d9-41fe-90da-12a1399387a7-kube-api-access-dk67w\") pod \"kube-storage-version-migrator-operator-b67b599dd-9m4nv\" (UID: \"6bba0773-58d9-41fe-90da-12a1399387a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9m4nv" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 
10:29:01.051841 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0d7166c-3042-4706-9683-6c6a32d29a9c-serving-cert\") pod \"route-controller-manager-6576b87f9c-tz4jj\" (UID: \"e0d7166c-3042-4706-9683-6c6a32d29a9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.051911 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-serving-cert\") pod \"controller-manager-879f6c89f-d2nkm\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.051936 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bba0773-58d9-41fe-90da-12a1399387a7-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9m4nv\" (UID: \"6bba0773-58d9-41fe-90da-12a1399387a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9m4nv" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.051963 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-encryption-config\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.051984 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac17bc60-379b-44dd-bf6b-2b9ecf87bf02-serving-cert\") pod 
\"etcd-operator-b45778765-zdjkf\" (UID: \"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052009 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac17bc60-379b-44dd-bf6b-2b9ecf87bf02-config\") pod \"etcd-operator-b45778765-zdjkf\" (UID: \"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052103 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b237825-2c85-4de6-a839-b91e7d23d433-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052128 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e70c0a9-c703-42f3-b47c-c32dd62b435b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b89pv\" (UID: \"6e70c0a9-c703-42f3-b47c-c32dd62b435b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b89pv" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052414 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-client-ca\") pod \"controller-manager-879f6c89f-d2nkm\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052441 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba876990-999b-4cd2-bb68-624cbf1b5701-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fwnnw\" (UID: \"ba876990-999b-4cd2-bb68-624cbf1b5701\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwnnw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052466 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b1d22605-abd6-4fc6-8352-8fe78ec02332-console-oauth-config\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052491 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba876990-999b-4cd2-bb68-624cbf1b5701-config\") pod \"machine-api-operator-5694c8668f-fwnnw\" (UID: \"ba876990-999b-4cd2-bb68-624cbf1b5701\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwnnw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052532 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20c5c038-19ee-4ac6-b2dc-281920c6be9a-config\") pod \"kube-apiserver-operator-766d6c64bb-h55dj\" (UID: \"20c5c038-19ee-4ac6-b2dc-281920c6be9a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h55dj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052561 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-image-import-ca\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052584 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ee9013e-5452-4e18-b4ce-3af1c8257662-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vmtwl\" (UID: \"7ee9013e-5452-4e18-b4ce-3af1c8257662\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052647 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8578n\" (UniqueName: \"kubernetes.io/projected/7ee9013e-5452-4e18-b4ce-3af1c8257662-kube-api-access-8578n\") pod \"cluster-image-registry-operator-dc59b4c8b-vmtwl\" (UID: \"7ee9013e-5452-4e18-b4ce-3af1c8257662\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052672 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx27b\" (UniqueName: \"kubernetes.io/projected/b1d22605-abd6-4fc6-8352-8fe78ec02332-kube-api-access-sx27b\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052723 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-config\") pod \"controller-manager-879f6c89f-d2nkm\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052747 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0faf2938-8e5e-451e-99f9-c09124f6a767-auth-proxy-config\") pod \"machine-approver-56656f9798-bd58v\" (UID: 
\"0faf2938-8e5e-451e-99f9-c09124f6a767\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052753 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7g65x"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052799 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d2nkm\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052823 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d7166c-3042-4706-9683-6c6a32d29a9c-config\") pod \"route-controller-manager-6576b87f9c-tz4jj\" (UID: \"e0d7166c-3042-4706-9683-6c6a32d29a9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052843 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ac17bc60-379b-44dd-bf6b-2b9ecf87bf02-etcd-ca\") pod \"etcd-operator-b45778765-zdjkf\" (UID: \"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052887 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhlws\" (UniqueName: \"kubernetes.io/projected/e87e4167-c76f-4adc-9d67-28485f6a6397-kube-api-access-zhlws\") pod \"dns-operator-744455d44c-9p4mb\" (UID: \"e87e4167-c76f-4adc-9d67-28485f6a6397\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-9p4mb" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.053001 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-etcd-serving-ca\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.053543 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0faf2938-8e5e-451e-99f9-c09124f6a767-config\") pod \"machine-approver-56656f9798-bd58v\" (UID: \"0faf2938-8e5e-451e-99f9-c09124f6a767\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.053611 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-audit\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.053638 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6b237825-2c85-4de6-a839-b91e7d23d433-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.053660 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-client-ca\") pod \"controller-manager-879f6c89f-d2nkm\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.053732 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e70c0a9-c703-42f3-b47c-c32dd62b435b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b89pv\" (UID: \"6e70c0a9-c703-42f3-b47c-c32dd62b435b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b89pv" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.052826 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.054839 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b237825-2c85-4de6-a839-b91e7d23d433-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.055212 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-service-ca\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.055294 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-config\") pod \"controller-manager-879f6c89f-d2nkm\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.055354 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dd67e0-7c58-4f3f-a712-ae2639e495fe-serving-cert\") pod \"openshift-config-operator-7777fb866f-7g65x\" (UID: \"24dd67e0-7c58-4f3f-a712-ae2639e495fe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g65x" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.055452 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fs96\" (UniqueName: \"kubernetes.io/projected/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-kube-api-access-8fs96\") pod \"controller-manager-879f6c89f-d2nkm\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.055550 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4fvx\" (UniqueName: \"kubernetes.io/projected/ba876990-999b-4cd2-bb68-624cbf1b5701-kube-api-access-s4fvx\") pod \"machine-api-operator-5694c8668f-fwnnw\" (UID: \"ba876990-999b-4cd2-bb68-624cbf1b5701\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwnnw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.055583 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bba0773-58d9-41fe-90da-12a1399387a7-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9m4nv\" (UID: \"6bba0773-58d9-41fe-90da-12a1399387a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9m4nv" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.055720 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/24dd67e0-7c58-4f3f-a712-ae2639e495fe-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7g65x\" (UID: \"24dd67e0-7c58-4f3f-a712-ae2639e495fe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g65x" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.055751 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7ee9013e-5452-4e18-b4ce-3af1c8257662-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vmtwl\" (UID: \"7ee9013e-5452-4e18-b4ce-3af1c8257662\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.055772 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h55dj"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.056028 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlgbh\" (UniqueName: \"kubernetes.io/projected/36e6e9de-b708-4242-9251-1ba3b849a749-kube-api-access-wlgbh\") pod \"console-operator-58897d9998-g4dbw\" (UID: \"36e6e9de-b708-4242-9251-1ba3b849a749\") " pod="openshift-console-operator/console-operator-58897d9998-g4dbw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.056049 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grp2h\" (UniqueName: \"kubernetes.io/projected/ac17bc60-379b-44dd-bf6b-2b9ecf87bf02-kube-api-access-grp2h\") pod \"etcd-operator-b45778765-zdjkf\" (UID: \"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.056105 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-oauth-serving-cert\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.056094 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-service-ca\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.056141 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b237825-2c85-4de6-a839-b91e7d23d433-serving-cert\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.056159 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e70c0a9-c703-42f3-b47c-c32dd62b435b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b89pv\" (UID: \"6e70c0a9-c703-42f3-b47c-c32dd62b435b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b89pv" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.056186 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6b237825-2c85-4de6-a839-b91e7d23d433-audit-policies\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.056147 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/ba876990-999b-4cd2-bb68-624cbf1b5701-config\") pod \"machine-api-operator-5694c8668f-fwnnw\" (UID: \"ba876990-999b-4cd2-bb68-624cbf1b5701\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwnnw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.056771 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-node-pullsecrets\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.056815 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.056836 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-trusted-ca-bundle\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.056885 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-serving-cert\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.056902 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/206a150d-98c4-4204-84e3-609198888fd4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-lj5z6\" (UID: \"206a150d-98c4-4204-84e3-609198888fd4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.057016 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-oauth-serving-cert\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.057097 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6b237825-2c85-4de6-a839-b91e7d23d433-audit-policies\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.057170 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d2nkm\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.057781 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.063129 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6b237825-2c85-4de6-a839-b91e7d23d433-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: 
\"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.063170 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1d22605-abd6-4fc6-8352-8fe78ec02332-console-serving-cert\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.058180 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-config\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.058251 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-etcd-serving-ca\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.058720 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-node-pullsecrets\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.058671 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536468-682zs"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.058784 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/ba876990-999b-4cd2-bb68-624cbf1b5701-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fwnnw\" (UID: \"ba876990-999b-4cd2-bb68-624cbf1b5701\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwnnw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.058873 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-audit\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.058923 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-config\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.058077 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/206a150d-98c4-4204-84e3-609198888fd4-serving-cert\") pod \"authentication-operator-69f744f599-lj5z6\" (UID: \"206a150d-98c4-4204-84e3-609198888fd4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.059317 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-image-import-ca\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.059547 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-trusted-ca-bundle\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.059570 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/206a150d-98c4-4204-84e3-609198888fd4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-lj5z6\" (UID: \"206a150d-98c4-4204-84e3-609198888fd4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.063872 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-serving-cert\") pod \"controller-manager-879f6c89f-d2nkm\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.064008 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-serving-cert\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.064173 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-encryption-config\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.064227 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-etcd-client\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.058736 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.065019 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b1d22605-abd6-4fc6-8352-8fe78ec02332-console-oauth-config\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.065073 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b89pv"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.065181 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-console-config\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.065215 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6b237825-2c85-4de6-a839-b91e7d23d433-audit-dir\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:01 
crc kubenswrapper[4728]: I0227 10:29:01.066040 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6b237825-2c85-4de6-a839-b91e7d23d433-encryption-config\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.066110 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36e6e9de-b708-4242-9251-1ba3b849a749-config\") pod \"console-operator-58897d9998-g4dbw\" (UID: \"36e6e9de-b708-4242-9251-1ba3b849a749\") " pod="openshift-console-operator/console-operator-58897d9998-g4dbw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.066898 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6b237825-2c85-4de6-a839-b91e7d23d433-audit-dir\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067112 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-audit-dir\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067174 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36e6e9de-b708-4242-9251-1ba3b849a749-serving-cert\") pod \"console-operator-58897d9998-g4dbw\" (UID: \"36e6e9de-b708-4242-9251-1ba3b849a749\") " 
pod="openshift-console-operator/console-operator-58897d9998-g4dbw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067217 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0d7166c-3042-4706-9683-6c6a32d29a9c-client-ca\") pod \"route-controller-manager-6576b87f9c-tz4jj\" (UID: \"e0d7166c-3042-4706-9683-6c6a32d29a9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067252 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b237825-2c85-4de6-a839-b91e7d23d433-etcd-client\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067284 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20c5c038-19ee-4ac6-b2dc-281920c6be9a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h55dj\" (UID: \"20c5c038-19ee-4ac6-b2dc-281920c6be9a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h55dj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067319 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvwmk\" (UniqueName: \"kubernetes.io/projected/24dd67e0-7c58-4f3f-a712-ae2639e495fe-kube-api-access-hvwmk\") pod \"openshift-config-operator-7777fb866f-7g65x\" (UID: \"24dd67e0-7c58-4f3f-a712-ae2639e495fe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g65x" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067350 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtrr9\" 
(UniqueName: \"kubernetes.io/projected/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-kube-api-access-mtrr9\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067381 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ba876990-999b-4cd2-bb68-624cbf1b5701-images\") pod \"machine-api-operator-5694c8668f-fwnnw\" (UID: \"ba876990-999b-4cd2-bb68-624cbf1b5701\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwnnw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067410 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/206a150d-98c4-4204-84e3-609198888fd4-service-ca-bundle\") pod \"authentication-operator-69f744f599-lj5z6\" (UID: \"206a150d-98c4-4204-84e3-609198888fd4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067444 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac17bc60-379b-44dd-bf6b-2b9ecf87bf02-etcd-client\") pod \"etcd-operator-b45778765-zdjkf\" (UID: \"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067471 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/206a150d-98c4-4204-84e3-609198888fd4-config\") pod \"authentication-operator-69f744f599-lj5z6\" (UID: \"206a150d-98c4-4204-84e3-609198888fd4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067516 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k42fz\" (UniqueName: \"kubernetes.io/projected/206a150d-98c4-4204-84e3-609198888fd4-kube-api-access-k42fz\") pod \"authentication-operator-69f744f599-lj5z6\" (UID: \"206a150d-98c4-4204-84e3-609198888fd4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067547 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20c5c038-19ee-4ac6-b2dc-281920c6be9a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h55dj\" (UID: \"20c5c038-19ee-4ac6-b2dc-281920c6be9a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h55dj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067574 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36e6e9de-b708-4242-9251-1ba3b849a749-trusted-ca\") pod \"console-operator-58897d9998-g4dbw\" (UID: \"36e6e9de-b708-4242-9251-1ba3b849a749\") " pod="openshift-console-operator/console-operator-58897d9998-g4dbw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067603 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e87e4167-c76f-4adc-9d67-28485f6a6397-metrics-tls\") pod \"dns-operator-744455d44c-9p4mb\" (UID: \"e87e4167-c76f-4adc-9d67-28485f6a6397\") " pod="openshift-dns-operator/dns-operator-744455d44c-9p4mb" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067627 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac17bc60-379b-44dd-bf6b-2b9ecf87bf02-etcd-service-ca\") pod \"etcd-operator-b45778765-zdjkf\" 
(UID: \"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067657 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0faf2938-8e5e-451e-99f9-c09124f6a767-machine-approver-tls\") pod \"machine-approver-56656f9798-bd58v\" (UID: \"0faf2938-8e5e-451e-99f9-c09124f6a767\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067688 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d2c4\" (UniqueName: \"kubernetes.io/projected/6b237825-2c85-4de6-a839-b91e7d23d433-kube-api-access-2d2c4\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067716 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px4d7\" (UniqueName: \"kubernetes.io/projected/3bd010d3-c618-478c-9dd4-f0c095b872ac-kube-api-access-px4d7\") pod \"migrator-59844c95c7-p7rtp\" (UID: \"3bd010d3-c618-478c-9dd4-f0c095b872ac\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-p7rtp" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.067872 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-audit-dir\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.068831 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-console-config\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.068902 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.069046 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/206a150d-98c4-4204-84e3-609198888fd4-config\") pod \"authentication-operator-69f744f599-lj5z6\" (UID: \"206a150d-98c4-4204-84e3-609198888fd4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.069605 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6b237825-2c85-4de6-a839-b91e7d23d433-encryption-config\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.069936 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ba876990-999b-4cd2-bb68-624cbf1b5701-images\") pod \"machine-api-operator-5694c8668f-fwnnw\" (UID: \"ba876990-999b-4cd2-bb68-624cbf1b5701\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwnnw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.071119 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b237825-2c85-4de6-a839-b91e7d23d433-etcd-client\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 
10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.071174 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bgndl"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.073864 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-etcd-client\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.074169 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b237825-2c85-4de6-a839-b91e7d23d433-serving-cert\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.075929 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/206a150d-98c4-4204-84e3-609198888fd4-service-ca-bundle\") pod \"authentication-operator-69f744f599-lj5z6\" (UID: \"206a150d-98c4-4204-84e3-609198888fd4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.076998 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jzdfv"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.078050 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.078636 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 27 10:29:01 crc 
kubenswrapper[4728]: I0227 10:29:01.079096 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.080113 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-p7rtp"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.081311 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.082583 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-kl7tc"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.083312 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kl7tc" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.084446 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f5nff"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.085674 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.086945 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kl7tc"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.088003 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.089022 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7n4w8"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.090098 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-multus/multus-admission-controller-857f4d67dd-zjx6k"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.091413 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bqx2"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.092666 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sftjq"] Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.097695 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.128840 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.147856 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.157724 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168272 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk67w\" (UniqueName: \"kubernetes.io/projected/6bba0773-58d9-41fe-90da-12a1399387a7-kube-api-access-dk67w\") pod \"kube-storage-version-migrator-operator-b67b599dd-9m4nv\" (UID: \"6bba0773-58d9-41fe-90da-12a1399387a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9m4nv" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168311 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0d7166c-3042-4706-9683-6c6a32d29a9c-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-tz4jj\" (UID: \"e0d7166c-3042-4706-9683-6c6a32d29a9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168334 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bba0773-58d9-41fe-90da-12a1399387a7-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9m4nv\" (UID: \"6bba0773-58d9-41fe-90da-12a1399387a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9m4nv" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168358 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac17bc60-379b-44dd-bf6b-2b9ecf87bf02-serving-cert\") pod \"etcd-operator-b45778765-zdjkf\" (UID: \"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168379 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac17bc60-379b-44dd-bf6b-2b9ecf87bf02-config\") pod \"etcd-operator-b45778765-zdjkf\" (UID: \"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168401 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e70c0a9-c703-42f3-b47c-c32dd62b435b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b89pv\" (UID: \"6e70c0a9-c703-42f3-b47c-c32dd62b435b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b89pv" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168427 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20c5c038-19ee-4ac6-b2dc-281920c6be9a-config\") pod \"kube-apiserver-operator-766d6c64bb-h55dj\" (UID: \"20c5c038-19ee-4ac6-b2dc-281920c6be9a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h55dj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168452 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ee9013e-5452-4e18-b4ce-3af1c8257662-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vmtwl\" (UID: \"7ee9013e-5452-4e18-b4ce-3af1c8257662\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168476 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8578n\" (UniqueName: \"kubernetes.io/projected/7ee9013e-5452-4e18-b4ce-3af1c8257662-kube-api-access-8578n\") pod \"cluster-image-registry-operator-dc59b4c8b-vmtwl\" (UID: \"7ee9013e-5452-4e18-b4ce-3af1c8257662\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168537 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d7166c-3042-4706-9683-6c6a32d29a9c-config\") pod \"route-controller-manager-6576b87f9c-tz4jj\" (UID: \"e0d7166c-3042-4706-9683-6c6a32d29a9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168560 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ac17bc60-379b-44dd-bf6b-2b9ecf87bf02-etcd-ca\") pod \"etcd-operator-b45778765-zdjkf\" (UID: \"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168580 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0faf2938-8e5e-451e-99f9-c09124f6a767-auth-proxy-config\") pod \"machine-approver-56656f9798-bd58v\" (UID: \"0faf2938-8e5e-451e-99f9-c09124f6a767\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168602 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhlws\" (UniqueName: \"kubernetes.io/projected/e87e4167-c76f-4adc-9d67-28485f6a6397-kube-api-access-zhlws\") pod \"dns-operator-744455d44c-9p4mb\" (UID: \"e87e4167-c76f-4adc-9d67-28485f6a6397\") " pod="openshift-dns-operator/dns-operator-744455d44c-9p4mb" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168624 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0faf2938-8e5e-451e-99f9-c09124f6a767-config\") pod \"machine-approver-56656f9798-bd58v\" (UID: \"0faf2938-8e5e-451e-99f9-c09124f6a767\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168646 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e70c0a9-c703-42f3-b47c-c32dd62b435b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b89pv\" (UID: \"6e70c0a9-c703-42f3-b47c-c32dd62b435b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b89pv" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168682 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6bba0773-58d9-41fe-90da-12a1399387a7-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9m4nv\" (UID: \"6bba0773-58d9-41fe-90da-12a1399387a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9m4nv" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168704 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dd67e0-7c58-4f3f-a712-ae2639e495fe-serving-cert\") pod \"openshift-config-operator-7777fb866f-7g65x\" (UID: \"24dd67e0-7c58-4f3f-a712-ae2639e495fe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g65x" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168725 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7ee9013e-5452-4e18-b4ce-3af1c8257662-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vmtwl\" (UID: \"7ee9013e-5452-4e18-b4ce-3af1c8257662\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168749 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlgbh\" (UniqueName: \"kubernetes.io/projected/36e6e9de-b708-4242-9251-1ba3b849a749-kube-api-access-wlgbh\") pod \"console-operator-58897d9998-g4dbw\" (UID: \"36e6e9de-b708-4242-9251-1ba3b849a749\") " pod="openshift-console-operator/console-operator-58897d9998-g4dbw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168769 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/24dd67e0-7c58-4f3f-a712-ae2639e495fe-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7g65x\" (UID: \"24dd67e0-7c58-4f3f-a712-ae2639e495fe\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g65x" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168789 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grp2h\" (UniqueName: \"kubernetes.io/projected/ac17bc60-379b-44dd-bf6b-2b9ecf87bf02-kube-api-access-grp2h\") pod \"etcd-operator-b45778765-zdjkf\" (UID: \"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168813 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e70c0a9-c703-42f3-b47c-c32dd62b435b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b89pv\" (UID: \"6e70c0a9-c703-42f3-b47c-c32dd62b435b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b89pv" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168844 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36e6e9de-b708-4242-9251-1ba3b849a749-config\") pod \"console-operator-58897d9998-g4dbw\" (UID: \"36e6e9de-b708-4242-9251-1ba3b849a749\") " pod="openshift-console-operator/console-operator-58897d9998-g4dbw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168867 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36e6e9de-b708-4242-9251-1ba3b849a749-serving-cert\") pod \"console-operator-58897d9998-g4dbw\" (UID: \"36e6e9de-b708-4242-9251-1ba3b849a749\") " pod="openshift-console-operator/console-operator-58897d9998-g4dbw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168894 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/20c5c038-19ee-4ac6-b2dc-281920c6be9a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h55dj\" (UID: \"20c5c038-19ee-4ac6-b2dc-281920c6be9a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h55dj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168915 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0d7166c-3042-4706-9683-6c6a32d29a9c-client-ca\") pod \"route-controller-manager-6576b87f9c-tz4jj\" (UID: \"e0d7166c-3042-4706-9683-6c6a32d29a9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168938 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvwmk\" (UniqueName: \"kubernetes.io/projected/24dd67e0-7c58-4f3f-a712-ae2639e495fe-kube-api-access-hvwmk\") pod \"openshift-config-operator-7777fb866f-7g65x\" (UID: \"24dd67e0-7c58-4f3f-a712-ae2639e495fe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g65x" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168965 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac17bc60-379b-44dd-bf6b-2b9ecf87bf02-etcd-client\") pod \"etcd-operator-b45778765-zdjkf\" (UID: \"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.168995 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20c5c038-19ee-4ac6-b2dc-281920c6be9a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h55dj\" (UID: \"20c5c038-19ee-4ac6-b2dc-281920c6be9a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h55dj" Feb 27 10:29:01 crc 
kubenswrapper[4728]: I0227 10:29:01.169016 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36e6e9de-b708-4242-9251-1ba3b849a749-trusted-ca\") pod \"console-operator-58897d9998-g4dbw\" (UID: \"36e6e9de-b708-4242-9251-1ba3b849a749\") " pod="openshift-console-operator/console-operator-58897d9998-g4dbw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.169037 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e87e4167-c76f-4adc-9d67-28485f6a6397-metrics-tls\") pod \"dns-operator-744455d44c-9p4mb\" (UID: \"e87e4167-c76f-4adc-9d67-28485f6a6397\") " pod="openshift-dns-operator/dns-operator-744455d44c-9p4mb" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.169058 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac17bc60-379b-44dd-bf6b-2b9ecf87bf02-etcd-service-ca\") pod \"etcd-operator-b45778765-zdjkf\" (UID: \"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.169082 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac17bc60-379b-44dd-bf6b-2b9ecf87bf02-config\") pod \"etcd-operator-b45778765-zdjkf\" (UID: \"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.169082 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0faf2938-8e5e-451e-99f9-c09124f6a767-machine-approver-tls\") pod \"machine-approver-56656f9798-bd58v\" (UID: \"0faf2938-8e5e-451e-99f9-c09124f6a767\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.169133 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px4d7\" (UniqueName: \"kubernetes.io/projected/3bd010d3-c618-478c-9dd4-f0c095b872ac-kube-api-access-px4d7\") pod \"migrator-59844c95c7-p7rtp\" (UID: \"3bd010d3-c618-478c-9dd4-f0c095b872ac\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-p7rtp" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.169153 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z2z7\" (UniqueName: \"kubernetes.io/projected/e0d7166c-3042-4706-9683-6c6a32d29a9c-kube-api-access-7z2z7\") pod \"route-controller-manager-6576b87f9c-tz4jj\" (UID: \"e0d7166c-3042-4706-9683-6c6a32d29a9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.169173 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmrkp\" (UniqueName: \"kubernetes.io/projected/0faf2938-8e5e-451e-99f9-c09124f6a767-kube-api-access-vmrkp\") pod \"machine-approver-56656f9798-bd58v\" (UID: \"0faf2938-8e5e-451e-99f9-c09124f6a767\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.169191 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ee9013e-5452-4e18-b4ce-3af1c8257662-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vmtwl\" (UID: \"7ee9013e-5452-4e18-b4ce-3af1c8257662\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.169291 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/20c5c038-19ee-4ac6-b2dc-281920c6be9a-config\") pod \"kube-apiserver-operator-766d6c64bb-h55dj\" (UID: \"20c5c038-19ee-4ac6-b2dc-281920c6be9a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h55dj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.169753 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ee9013e-5452-4e18-b4ce-3af1c8257662-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vmtwl\" (UID: \"7ee9013e-5452-4e18-b4ce-3af1c8257662\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.170444 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ac17bc60-379b-44dd-bf6b-2b9ecf87bf02-etcd-ca\") pod \"etcd-operator-b45778765-zdjkf\" (UID: \"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.170586 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36e6e9de-b708-4242-9251-1ba3b849a749-config\") pod \"console-operator-58897d9998-g4dbw\" (UID: \"36e6e9de-b708-4242-9251-1ba3b849a749\") " pod="openshift-console-operator/console-operator-58897d9998-g4dbw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.171029 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36e6e9de-b708-4242-9251-1ba3b849a749-trusted-ca\") pod \"console-operator-58897d9998-g4dbw\" (UID: \"36e6e9de-b708-4242-9251-1ba3b849a749\") " pod="openshift-console-operator/console-operator-58897d9998-g4dbw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.171055 4728 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d7166c-3042-4706-9683-6c6a32d29a9c-config\") pod \"route-controller-manager-6576b87f9c-tz4jj\" (UID: \"e0d7166c-3042-4706-9683-6c6a32d29a9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.171085 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0faf2938-8e5e-451e-99f9-c09124f6a767-auth-proxy-config\") pod \"machine-approver-56656f9798-bd58v\" (UID: \"0faf2938-8e5e-451e-99f9-c09124f6a767\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.171427 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0d7166c-3042-4706-9683-6c6a32d29a9c-client-ca\") pod \"route-controller-manager-6576b87f9c-tz4jj\" (UID: \"e0d7166c-3042-4706-9683-6c6a32d29a9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.171472 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0faf2938-8e5e-451e-99f9-c09124f6a767-config\") pod \"machine-approver-56656f9798-bd58v\" (UID: \"0faf2938-8e5e-451e-99f9-c09124f6a767\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.171543 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0d7166c-3042-4706-9683-6c6a32d29a9c-serving-cert\") pod \"route-controller-manager-6576b87f9c-tz4jj\" (UID: \"e0d7166c-3042-4706-9683-6c6a32d29a9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" Feb 27 10:29:01 crc 
kubenswrapper[4728]: I0227 10:29:01.171835 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/24dd67e0-7c58-4f3f-a712-ae2639e495fe-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7g65x\" (UID: \"24dd67e0-7c58-4f3f-a712-ae2639e495fe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g65x" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.172006 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac17bc60-379b-44dd-bf6b-2b9ecf87bf02-etcd-service-ca\") pod \"etcd-operator-b45778765-zdjkf\" (UID: \"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.173288 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36e6e9de-b708-4242-9251-1ba3b849a749-serving-cert\") pod \"console-operator-58897d9998-g4dbw\" (UID: \"36e6e9de-b708-4242-9251-1ba3b849a749\") " pod="openshift-console-operator/console-operator-58897d9998-g4dbw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.173523 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac17bc60-379b-44dd-bf6b-2b9ecf87bf02-etcd-client\") pod \"etcd-operator-b45778765-zdjkf\" (UID: \"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.173932 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dd67e0-7c58-4f3f-a712-ae2639e495fe-serving-cert\") pod \"openshift-config-operator-7777fb866f-7g65x\" (UID: \"24dd67e0-7c58-4f3f-a712-ae2639e495fe\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g65x" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.174063 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e87e4167-c76f-4adc-9d67-28485f6a6397-metrics-tls\") pod \"dns-operator-744455d44c-9p4mb\" (UID: \"e87e4167-c76f-4adc-9d67-28485f6a6397\") " pod="openshift-dns-operator/dns-operator-744455d44c-9p4mb" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.174121 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ee9013e-5452-4e18-b4ce-3af1c8257662-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vmtwl\" (UID: \"7ee9013e-5452-4e18-b4ce-3af1c8257662\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.174492 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0faf2938-8e5e-451e-99f9-c09124f6a767-machine-approver-tls\") pod \"machine-approver-56656f9798-bd58v\" (UID: \"0faf2938-8e5e-451e-99f9-c09124f6a767\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.177590 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.179667 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac17bc60-379b-44dd-bf6b-2b9ecf87bf02-serving-cert\") pod \"etcd-operator-b45778765-zdjkf\" (UID: \"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 
10:29:01.182487 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20c5c038-19ee-4ac6-b2dc-281920c6be9a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h55dj\" (UID: \"20c5c038-19ee-4ac6-b2dc-281920c6be9a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h55dj" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.198487 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.217913 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.237774 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.257390 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.278116 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.298094 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.318980 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.338437 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.366669 4728 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.380030 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.398525 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.419867 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.439075 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.459795 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.479610 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.489984 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e70c0a9-c703-42f3-b47c-c32dd62b435b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b89pv\" (UID: \"6e70c0a9-c703-42f3-b47c-c32dd62b435b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b89pv" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.499699 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.500806 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e70c0a9-c703-42f3-b47c-c32dd62b435b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b89pv\" (UID: \"6e70c0a9-c703-42f3-b47c-c32dd62b435b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b89pv" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.518474 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.538717 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.557686 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.579035 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.598572 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.610031 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bba0773-58d9-41fe-90da-12a1399387a7-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9m4nv\" (UID: \"6bba0773-58d9-41fe-90da-12a1399387a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9m4nv" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.618431 4728 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.638764 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.658846 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.667662 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bba0773-58d9-41fe-90da-12a1399387a7-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9m4nv\" (UID: \"6bba0773-58d9-41fe-90da-12a1399387a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9m4nv" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.698165 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.719103 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.738393 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.758459 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.779126 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.798099 4728 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"service-ca-bundle" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.818552 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.838037 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.858479 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.878724 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.898672 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.919316 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.939120 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.958870 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.976563 4728 request.go:700] Waited for 1.004926619s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-c2lfx&limit=500&resourceVersion=0 Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.978582 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 27 10:29:01 crc kubenswrapper[4728]: I0227 10:29:01.999160 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.018951 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.038552 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.074137 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.078193 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.098909 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.118017 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.137977 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.171322 4728 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.179875 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.199159 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.219285 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.239217 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.258643 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.278839 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.298150 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.318809 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.337664 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.358325 4728 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.377430 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.397670 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.417676 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.438400 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.457286 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.478001 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.499142 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.518704 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.538884 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.558100 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.578672 4728 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.598659 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.618938 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.637932 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.658159 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.678262 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.697939 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.718726 4728 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.738432 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.757989 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.778822 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 
10:29:02.799057 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.838310 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.893554 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fs96\" (UniqueName: \"kubernetes.io/projected/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-kube-api-access-8fs96\") pod \"controller-manager-879f6c89f-d2nkm\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.912857 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4fvx\" (UniqueName: \"kubernetes.io/projected/ba876990-999b-4cd2-bb68-624cbf1b5701-kube-api-access-s4fvx\") pod \"machine-api-operator-5694c8668f-fwnnw\" (UID: \"ba876990-999b-4cd2-bb68-624cbf1b5701\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwnnw" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.924443 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx27b\" (UniqueName: \"kubernetes.io/projected/b1d22605-abd6-4fc6-8352-8fe78ec02332-kube-api-access-sx27b\") pod \"console-f9d7485db-tvflj\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.943003 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.947023 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtrr9\" (UniqueName: \"kubernetes.io/projected/f5b2eb8e-7b36-40ac-b745-9e1a3efaec21-kube-api-access-mtrr9\") pod \"apiserver-76f77b778f-2f6px\" (UID: \"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21\") " pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.966747 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k42fz\" (UniqueName: \"kubernetes.io/projected/206a150d-98c4-4204-84e3-609198888fd4-kube-api-access-k42fz\") pod \"authentication-operator-69f744f599-lj5z6\" (UID: \"206a150d-98c4-4204-84e3-609198888fd4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.978697 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.986388 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d2c4\" (UniqueName: \"kubernetes.io/projected/6b237825-2c85-4de6-a839-b91e7d23d433-kube-api-access-2d2c4\") pod \"apiserver-7bbb656c7d-j72xn\" (UID: \"6b237825-2c85-4de6-a839-b91e7d23d433\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.997176 4728 request.go:700] Waited for 1.913540691s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0 Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.998523 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-fwnnw" Feb 27 10:29:02 crc kubenswrapper[4728]: I0227 10:29:02.999078 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.018190 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.021709 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.030478 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.038810 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.085154 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk67w\" (UniqueName: \"kubernetes.io/projected/6bba0773-58d9-41fe-90da-12a1399387a7-kube-api-access-dk67w\") pod \"kube-storage-version-migrator-operator-b67b599dd-9m4nv\" (UID: \"6bba0773-58d9-41fe-90da-12a1399387a7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9m4nv" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.096329 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e70c0a9-c703-42f3-b47c-c32dd62b435b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-b89pv\" (UID: \"6e70c0a9-c703-42f3-b47c-c32dd62b435b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b89pv" Feb 27 10:29:03 crc 
kubenswrapper[4728]: I0227 10:29:03.114647 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8578n\" (UniqueName: \"kubernetes.io/projected/7ee9013e-5452-4e18-b4ce-3af1c8257662-kube-api-access-8578n\") pod \"cluster-image-registry-operator-dc59b4c8b-vmtwl\" (UID: \"7ee9013e-5452-4e18-b4ce-3af1c8257662\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.151700 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z2z7\" (UniqueName: \"kubernetes.io/projected/e0d7166c-3042-4706-9683-6c6a32d29a9c-kube-api-access-7z2z7\") pod \"route-controller-manager-6576b87f9c-tz4jj\" (UID: \"e0d7166c-3042-4706-9683-6c6a32d29a9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.154933 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvwmk\" (UniqueName: \"kubernetes.io/projected/24dd67e0-7c58-4f3f-a712-ae2639e495fe-kube-api-access-hvwmk\") pod \"openshift-config-operator-7777fb866f-7g65x\" (UID: \"24dd67e0-7c58-4f3f-a712-ae2639e495fe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g65x" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.174622 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grp2h\" (UniqueName: \"kubernetes.io/projected/ac17bc60-379b-44dd-bf6b-2b9ecf87bf02-kube-api-access-grp2h\") pod \"etcd-operator-b45778765-zdjkf\" (UID: \"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.193404 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px4d7\" (UniqueName: \"kubernetes.io/projected/3bd010d3-c618-478c-9dd4-f0c095b872ac-kube-api-access-px4d7\") pod 
\"migrator-59844c95c7-p7rtp\" (UID: \"3bd010d3-c618-478c-9dd4-f0c095b872ac\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-p7rtp" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.195227 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d2nkm"] Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.206910 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b89pv" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.210275 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20c5c038-19ee-4ac6-b2dc-281920c6be9a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h55dj\" (UID: \"20c5c038-19ee-4ac6-b2dc-281920c6be9a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h55dj" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.212814 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-p7rtp" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.219999 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9m4nv" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.231272 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhlws\" (UniqueName: \"kubernetes.io/projected/e87e4167-c76f-4adc-9d67-28485f6a6397-kube-api-access-zhlws\") pod \"dns-operator-744455d44c-9p4mb\" (UID: \"e87e4167-c76f-4adc-9d67-28485f6a6397\") " pod="openshift-dns-operator/dns-operator-744455d44c-9p4mb" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.234793 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.253395 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlgbh\" (UniqueName: \"kubernetes.io/projected/36e6e9de-b708-4242-9251-1ba3b849a749-kube-api-access-wlgbh\") pod \"console-operator-58897d9998-g4dbw\" (UID: \"36e6e9de-b708-4242-9251-1ba3b849a749\") " pod="openshift-console-operator/console-operator-58897d9998-g4dbw" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.259414 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.280622 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fwnnw"] Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.288577 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7ee9013e-5452-4e18-b4ce-3af1c8257662-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vmtwl\" (UID: \"7ee9013e-5452-4e18-b4ce-3af1c8257662\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.298129 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmrkp\" (UniqueName: \"kubernetes.io/projected/0faf2938-8e5e-451e-99f9-c09124f6a767-kube-api-access-vmrkp\") pod \"machine-approver-56656f9798-bd58v\" (UID: \"0faf2938-8e5e-451e-99f9-c09124f6a767\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311256 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-audit-policies\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311305 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311323 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de6bdf95-032d-42a2-a8b5-0202641a05c1-audit-dir\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311338 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311433 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311476 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7319e158-317b-4f98-b9da-0481f2c0aca8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5lw67\" (UID: \"7319e158-317b-4f98-b9da-0481f2c0aca8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5lw67" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311497 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311549 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcbb9385-ebab-4698-9b8b-ceaf90e103f2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-npkjm\" (UID: \"dcbb9385-ebab-4698-9b8b-ceaf90e103f2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npkjm" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311571 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tzlx\" (UniqueName: \"kubernetes.io/projected/8f2ca8d1-e849-4530-9260-09dd161dc4c3-kube-api-access-8tzlx\") pod \"openshift-controller-manager-operator-756b6f6bc6-q9bgg\" (UID: \"8f2ca8d1-e849-4530-9260-09dd161dc4c3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q9bgg" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 
10:29:03.311587 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcbb9385-ebab-4698-9b8b-ceaf90e103f2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-npkjm\" (UID: \"dcbb9385-ebab-4698-9b8b-ceaf90e103f2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npkjm" Feb 27 10:29:03 crc kubenswrapper[4728]: E0227 10:29:03.311597 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:03.811585412 +0000 UTC m=+163.773951518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311618 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f2ca8d1-e849-4530-9260-09dd161dc4c3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q9bgg\" (UID: \"8f2ca8d1-e849-4530-9260-09dd161dc4c3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q9bgg" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311643 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311658 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f01d342-6bde-4063-b99d-b0efda456aef-trusted-ca\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311675 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4kgv\" (UniqueName: \"kubernetes.io/projected/9f01d342-6bde-4063-b99d-b0efda456aef-kube-api-access-c4kgv\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311693 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f537b0d-c3b9-4f04-b471-91c204f854a0-metrics-tls\") pod \"ingress-operator-5b745b69d9-kwpmw\" (UID: \"2f537b0d-c3b9-4f04-b471-91c204f854a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311707 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzwwm\" (UniqueName: \"kubernetes.io/projected/2f537b0d-c3b9-4f04-b471-91c204f854a0-kube-api-access-fzwwm\") pod \"ingress-operator-5b745b69d9-kwpmw\" (UID: \"2f537b0d-c3b9-4f04-b471-91c204f854a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311728 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f01d342-6bde-4063-b99d-b0efda456aef-registry-tls\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311743 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311756 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311773 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311793 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f01d342-6bde-4063-b99d-b0efda456aef-ca-trust-extracted\") pod 
\"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311808 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9f01d342-6bde-4063-b99d-b0efda456aef-registry-certificates\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311823 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f01d342-6bde-4063-b99d-b0efda456aef-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311842 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311858 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311874 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwn5v\" (UniqueName: \"kubernetes.io/projected/dcbb9385-ebab-4698-9b8b-ceaf90e103f2-kube-api-access-pwn5v\") pod \"openshift-apiserver-operator-796bbdcf4f-npkjm\" (UID: \"dcbb9385-ebab-4698-9b8b-ceaf90e103f2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npkjm" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311888 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f2ca8d1-e849-4530-9260-09dd161dc4c3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q9bgg\" (UID: \"8f2ca8d1-e849-4530-9260-09dd161dc4c3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q9bgg" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311904 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f01d342-6bde-4063-b99d-b0efda456aef-bound-sa-token\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311917 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311935 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpzjx\" (UniqueName: \"kubernetes.io/projected/a3656135-373e-4ec6-9cf1-e34d6a95c5a5-kube-api-access-dpzjx\") pod \"downloads-7954f5f757-c46ql\" (UID: \"a3656135-373e-4ec6-9cf1-e34d6a95c5a5\") " pod="openshift-console/downloads-7954f5f757-c46ql" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311955 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f537b0d-c3b9-4f04-b471-91c204f854a0-trusted-ca\") pod \"ingress-operator-5b745b69d9-kwpmw\" (UID: \"2f537b0d-c3b9-4f04-b471-91c204f854a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311980 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.311993 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzfpf\" (UniqueName: \"kubernetes.io/projected/de6bdf95-032d-42a2-a8b5-0202641a05c1-kube-api-access-fzfpf\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.312006 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2f537b0d-c3b9-4f04-b471-91c204f854a0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kwpmw\" 
(UID: \"2f537b0d-c3b9-4f04-b471-91c204f854a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.312022 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x2k4\" (UniqueName: \"kubernetes.io/projected/7319e158-317b-4f98-b9da-0481f2c0aca8-kube-api-access-7x2k4\") pod \"cluster-samples-operator-665b6dd947-5lw67\" (UID: \"7319e158-317b-4f98-b9da-0481f2c0aca8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5lw67" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.325846 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tvflj"] Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.345930 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v" Feb 27 10:29:03 crc kubenswrapper[4728]: W0227 10:29:03.363675 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1d22605_abd6_4fc6_8352_8fe78ec02332.slice/crio-56c79be41d0ed79e41abcf9d588c28c4a9e0751970eb545cc2f507f5cd3ea33c WatchSource:0}: Error finding container 56c79be41d0ed79e41abcf9d588c28c4a9e0751970eb545cc2f507f5cd3ea33c: Status 404 returned error can't find the container with id 56c79be41d0ed79e41abcf9d588c28c4a9e0751970eb545cc2f507f5cd3ea33c Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.404866 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.410601 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-9p4mb" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.410930 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b89pv"] Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.412765 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.413239 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5eed07da-15ae-438b-a97f-568cd05ea1ee-profile-collector-cert\") pod \"olm-operator-6b444d44fb-fj9d6\" (UID: \"5eed07da-15ae-438b-a97f-568cd05ea1ee\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.413343 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f82e7468-152d-46a6-9012-3bb0b4219b3f-secret-volume\") pod \"collect-profiles-29536455-mx6zt\" (UID: \"f82e7468-152d-46a6-9012-3bb0b4219b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.413366 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kl4f\" (UniqueName: \"kubernetes.io/projected/f82e7468-152d-46a6-9012-3bb0b4219b3f-kube-api-access-2kl4f\") pod \"collect-profiles-29536455-mx6zt\" (UID: \"f82e7468-152d-46a6-9012-3bb0b4219b3f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.413617 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f537b0d-c3b9-4f04-b471-91c204f854a0-trusted-ca\") pod \"ingress-operator-5b745b69d9-kwpmw\" (UID: \"2f537b0d-c3b9-4f04-b471-91c204f854a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.413832 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzfpf\" (UniqueName: \"kubernetes.io/projected/de6bdf95-032d-42a2-a8b5-0202641a05c1-kube-api-access-fzfpf\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.413957 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f91e18b1-f9ed-4a0d-8aff-e7344791fb5e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9bqx2\" (UID: \"f91e18b1-f9ed-4a0d-8aff-e7344791fb5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bqx2" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.413991 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x2k4\" (UniqueName: \"kubernetes.io/projected/7319e158-317b-4f98-b9da-0481f2c0aca8-kube-api-access-7x2k4\") pod \"cluster-samples-operator-665b6dd947-5lw67\" (UID: \"7319e158-317b-4f98-b9da-0481f2c0aca8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5lw67" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.414058 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqc8z\" (UniqueName: \"kubernetes.io/projected/8adeb294-3f47-4d17-bf64-c0d6328b6f2d-kube-api-access-pqc8z\") pod \"machine-config-operator-74547568cd-4psh7\" (UID: \"8adeb294-3f47-4d17-bf64-c0d6328b6f2d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.414198 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de6bdf95-032d-42a2-a8b5-0202641a05c1-audit-dir\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.414245 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.414343 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/103ae8fe-45e0-4696-be10-bb2ced3ee561-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zjx6k\" (UID: \"103ae8fe-45e0-4696-be10-bb2ced3ee561\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zjx6k" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.414388 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4l27\" (UniqueName: \"kubernetes.io/projected/103ae8fe-45e0-4696-be10-bb2ced3ee561-kube-api-access-x4l27\") pod \"multus-admission-controller-857f4d67dd-zjx6k\" 
(UID: \"103ae8fe-45e0-4696-be10-bb2ced3ee561\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zjx6k" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.414432 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6-config-volume\") pod \"dns-default-qt4kh\" (UID: \"ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6\") " pod="openshift-dns/dns-default-qt4kh" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.414553 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tzlx\" (UniqueName: \"kubernetes.io/projected/8f2ca8d1-e849-4530-9260-09dd161dc4c3-kube-api-access-8tzlx\") pod \"openshift-controller-manager-operator-756b6f6bc6-q9bgg\" (UID: \"8f2ca8d1-e849-4530-9260-09dd161dc4c3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q9bgg" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.414581 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcbb9385-ebab-4698-9b8b-ceaf90e103f2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-npkjm\" (UID: \"dcbb9385-ebab-4698-9b8b-ceaf90e103f2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npkjm" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.414606 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zs9x\" (UniqueName: \"kubernetes.io/projected/f91e18b1-f9ed-4a0d-8aff-e7344791fb5e-kube-api-access-2zs9x\") pod \"package-server-manager-789f6589d5-9bqx2\" (UID: \"f91e18b1-f9ed-4a0d-8aff-e7344791fb5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bqx2" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.414633 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8adeb294-3f47-4d17-bf64-c0d6328b6f2d-proxy-tls\") pod \"machine-config-operator-74547568cd-4psh7\" (UID: \"8adeb294-3f47-4d17-bf64-c0d6328b6f2d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.414582 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de6bdf95-032d-42a2-a8b5-0202641a05c1-audit-dir\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: E0227 10:29:03.415417 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:03.915332379 +0000 UTC m=+163.877698495 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.415674 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8adeb294-3f47-4d17-bf64-c0d6328b6f2d-images\") pod \"machine-config-operator-74547568cd-4psh7\" (UID: \"8adeb294-3f47-4d17-bf64-c0d6328b6f2d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.415734 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8adeb294-3f47-4d17-bf64-c0d6328b6f2d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4psh7\" (UID: \"8adeb294-3f47-4d17-bf64-c0d6328b6f2d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.415739 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcbb9385-ebab-4698-9b8b-ceaf90e103f2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-npkjm\" (UID: \"dcbb9385-ebab-4698-9b8b-ceaf90e103f2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npkjm" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.415805 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/5eed07da-15ae-438b-a97f-568cd05ea1ee-srv-cert\") pod \"olm-operator-6b444d44fb-fj9d6\" (UID: \"5eed07da-15ae-438b-a97f-568cd05ea1ee\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.415851 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4kgv\" (UniqueName: \"kubernetes.io/projected/9f01d342-6bde-4063-b99d-b0efda456aef-kube-api-access-c4kgv\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.415904 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6e5d2a5d-0128-4d37-b653-555cc40a8d39-mountpoint-dir\") pod \"csi-hostpathplugin-sftjq\" (UID: \"6e5d2a5d-0128-4d37-b653-555cc40a8d39\") " pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.415928 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e5d2a5d-0128-4d37-b653-555cc40a8d39-socket-dir\") pod \"csi-hostpathplugin-sftjq\" (UID: \"6e5d2a5d-0128-4d37-b653-555cc40a8d39\") " pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.415964 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftrrd\" (UniqueName: \"kubernetes.io/projected/bf0a7b0a-5d73-4cf1-81b5-00c7232ced39-kube-api-access-ftrrd\") pod \"machine-config-server-wktw4\" (UID: \"bf0a7b0a-5d73-4cf1-81b5-00c7232ced39\") " pod="openshift-machine-config-operator/machine-config-server-wktw4" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416002 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416037 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f01d342-6bde-4063-b99d-b0efda456aef-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416057 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f01d342-6bde-4063-b99d-b0efda456aef-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416078 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6-metrics-tls\") pod \"dns-default-qt4kh\" (UID: \"ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6\") " pod="openshift-dns/dns-default-qt4kh" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416089 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416145 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416174 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8cqf\" (UniqueName: \"kubernetes.io/projected/6e5d2a5d-0128-4d37-b653-555cc40a8d39-kube-api-access-f8cqf\") pod \"csi-hostpathplugin-sftjq\" (UID: \"6e5d2a5d-0128-4d37-b653-555cc40a8d39\") " pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416198 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf0a7b0a-5d73-4cf1-81b5-00c7232ced39-certs\") pod \"machine-config-server-wktw4\" (UID: \"bf0a7b0a-5d73-4cf1-81b5-00c7232ced39\") " pod="openshift-machine-config-operator/machine-config-server-wktw4" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416235 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416295 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/6e5d2a5d-0128-4d37-b653-555cc40a8d39-registration-dir\") pod \"csi-hostpathplugin-sftjq\" (UID: \"6e5d2a5d-0128-4d37-b653-555cc40a8d39\") " pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416335 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416356 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2f537b0d-c3b9-4f04-b471-91c204f854a0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kwpmw\" (UID: \"2f537b0d-c3b9-4f04-b471-91c204f854a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416406 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-audit-policies\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416428 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416457 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416484 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7319e158-317b-4f98-b9da-0481f2c0aca8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5lw67\" (UID: \"7319e158-317b-4f98-b9da-0481f2c0aca8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5lw67" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416526 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416565 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf0a7b0a-5d73-4cf1-81b5-00c7232ced39-node-bootstrap-token\") pod \"machine-config-server-wktw4\" (UID: \"bf0a7b0a-5d73-4cf1-81b5-00c7232ced39\") " pod="openshift-machine-config-operator/machine-config-server-wktw4" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416615 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcbb9385-ebab-4698-9b8b-ceaf90e103f2-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-npkjm\" (UID: \"dcbb9385-ebab-4698-9b8b-ceaf90e103f2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npkjm" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416640 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tf78\" (UniqueName: \"kubernetes.io/projected/ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6-kube-api-access-2tf78\") pod \"dns-default-qt4kh\" (UID: \"ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6\") " pod="openshift-dns/dns-default-qt4kh" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416644 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f537b0d-c3b9-4f04-b471-91c204f854a0-trusted-ca\") pod \"ingress-operator-5b745b69d9-kwpmw\" (UID: \"2f537b0d-c3b9-4f04-b471-91c204f854a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416678 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f2ca8d1-e849-4530-9260-09dd161dc4c3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q9bgg\" (UID: \"8f2ca8d1-e849-4530-9260-09dd161dc4c3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q9bgg" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416704 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbnmk\" (UniqueName: \"kubernetes.io/projected/5eed07da-15ae-438b-a97f-568cd05ea1ee-kube-api-access-fbnmk\") pod \"olm-operator-6b444d44fb-fj9d6\" (UID: \"5eed07da-15ae-438b-a97f-568cd05ea1ee\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416729 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416759 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6e5d2a5d-0128-4d37-b653-555cc40a8d39-csi-data-dir\") pod \"csi-hostpathplugin-sftjq\" (UID: \"6e5d2a5d-0128-4d37-b653-555cc40a8d39\") " pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416787 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f01d342-6bde-4063-b99d-b0efda456aef-trusted-ca\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416823 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/77a21ee7-0d79-437a-8041-55bb91ef0212-profile-collector-cert\") pod \"catalog-operator-68c6474976-ghzzf\" (UID: \"77a21ee7-0d79-437a-8041-55bb91ef0212\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416848 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f82e7468-152d-46a6-9012-3bb0b4219b3f-config-volume\") pod \"collect-profiles-29536455-mx6zt\" (UID: \"f82e7468-152d-46a6-9012-3bb0b4219b3f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416906 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f537b0d-c3b9-4f04-b471-91c204f854a0-metrics-tls\") pod \"ingress-operator-5b745b69d9-kwpmw\" (UID: \"2f537b0d-c3b9-4f04-b471-91c204f854a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416930 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzwwm\" (UniqueName: \"kubernetes.io/projected/2f537b0d-c3b9-4f04-b471-91c204f854a0-kube-api-access-fzwwm\") pod \"ingress-operator-5b745b69d9-kwpmw\" (UID: \"2f537b0d-c3b9-4f04-b471-91c204f854a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.416957 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6e5d2a5d-0128-4d37-b653-555cc40a8d39-plugins-dir\") pod \"csi-hostpathplugin-sftjq\" (UID: \"6e5d2a5d-0128-4d37-b653-555cc40a8d39\") " pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.417009 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f01d342-6bde-4063-b99d-b0efda456aef-registry-tls\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.417035 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.417062 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.417083 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/77a21ee7-0d79-437a-8041-55bb91ef0212-srv-cert\") pod \"catalog-operator-68c6474976-ghzzf\" (UID: \"77a21ee7-0d79-437a-8041-55bb91ef0212\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.417120 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9f01d342-6bde-4063-b99d-b0efda456aef-registry-certificates\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.417144 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.417166 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49tf4\" (UniqueName: \"kubernetes.io/projected/77a21ee7-0d79-437a-8041-55bb91ef0212-kube-api-access-49tf4\") pod \"catalog-operator-68c6474976-ghzzf\" (UID: \"77a21ee7-0d79-437a-8041-55bb91ef0212\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.417190 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwn5v\" (UniqueName: \"kubernetes.io/projected/dcbb9385-ebab-4698-9b8b-ceaf90e103f2-kube-api-access-pwn5v\") pod \"openshift-apiserver-operator-796bbdcf4f-npkjm\" (UID: \"dcbb9385-ebab-4698-9b8b-ceaf90e103f2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npkjm" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.417216 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f2ca8d1-e849-4530-9260-09dd161dc4c3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q9bgg\" (UID: \"8f2ca8d1-e849-4530-9260-09dd161dc4c3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q9bgg" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.417255 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f01d342-6bde-4063-b99d-b0efda456aef-bound-sa-token\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.417294 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dpzjx\" (UniqueName: \"kubernetes.io/projected/a3656135-373e-4ec6-9cf1-e34d6a95c5a5-kube-api-access-dpzjx\") pod \"downloads-7954f5f757-c46ql\" (UID: \"a3656135-373e-4ec6-9cf1-e34d6a95c5a5\") " pod="openshift-console/downloads-7954f5f757-c46ql" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.418165 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.418842 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f2ca8d1-e849-4530-9260-09dd161dc4c3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q9bgg\" (UID: \"8f2ca8d1-e849-4530-9260-09dd161dc4c3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q9bgg" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.419068 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f01d342-6bde-4063-b99d-b0efda456aef-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.419731 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-p7rtp"] Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.420363 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc 
kubenswrapper[4728]: E0227 10:29:03.420666 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:03.92065099 +0000 UTC m=+163.883017096 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.421194 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9f01d342-6bde-4063-b99d-b0efda456aef-registry-certificates\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.422714 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f2ca8d1-e849-4530-9260-09dd161dc4c3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q9bgg\" (UID: \"8f2ca8d1-e849-4530-9260-09dd161dc4c3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q9bgg" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.423667 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f01d342-6bde-4063-b99d-b0efda456aef-registry-tls\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") 
" pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.423817 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7319e158-317b-4f98-b9da-0481f2c0aca8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5lw67\" (UID: \"7319e158-317b-4f98-b9da-0481f2c0aca8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5lw67" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.425215 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.425588 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-audit-policies\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.425600 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g65x" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.426167 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.427038 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f01d342-6bde-4063-b99d-b0efda456aef-trusted-ca\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.427269 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcbb9385-ebab-4698-9b8b-ceaf90e103f2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-npkjm\" (UID: \"dcbb9385-ebab-4698-9b8b-ceaf90e103f2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npkjm" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.427668 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f01d342-6bde-4063-b99d-b0efda456aef-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.427930 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.428683 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.429753 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: W0227 10:29:03.430460 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bd010d3_c618_478c_9dd4_f0c095b872ac.slice/crio-c85eb1fde8ab8cb1cb4aa7d4517426e11961dac1d29c3384e95fbc89f3f87e22 WatchSource:0}: Error finding container c85eb1fde8ab8cb1cb4aa7d4517426e11961dac1d29c3384e95fbc89f3f87e22: Status 404 returned error can't find the container with id c85eb1fde8ab8cb1cb4aa7d4517426e11961dac1d29c3384e95fbc89f3f87e22 Feb 27 10:29:03 crc kubenswrapper[4728]: W0227 10:29:03.432176 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e70c0a9_c703_42f3_b47c_c32dd62b435b.slice/crio-155b542fbf5a569bb47b8320af7614fa815030d85f8247ea772bc53bbf93fbbc WatchSource:0}: Error finding 
container 155b542fbf5a569bb47b8320af7614fa815030d85f8247ea772bc53bbf93fbbc: Status 404 returned error can't find the container with id 155b542fbf5a569bb47b8320af7614fa815030d85f8247ea772bc53bbf93fbbc Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.433360 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f537b0d-c3b9-4f04-b471-91c204f854a0-metrics-tls\") pod \"ingress-operator-5b745b69d9-kwpmw\" (UID: \"2f537b0d-c3b9-4f04-b471-91c204f854a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.434563 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.434847 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.435316 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 
10:29:03.440645 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.443065 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9m4nv"] Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.446228 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.458818 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tzlx\" (UniqueName: \"kubernetes.io/projected/8f2ca8d1-e849-4530-9260-09dd161dc4c3-kube-api-access-8tzlx\") pod \"openshift-controller-manager-operator-756b6f6bc6-q9bgg\" (UID: \"8f2ca8d1-e849-4530-9260-09dd161dc4c3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q9bgg" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.482053 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x2k4\" (UniqueName: \"kubernetes.io/projected/7319e158-317b-4f98-b9da-0481f2c0aca8-kube-api-access-7x2k4\") pod \"cluster-samples-operator-665b6dd947-5lw67\" (UID: \"7319e158-317b-4f98-b9da-0481f2c0aca8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5lw67" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.484662 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2f6px"] Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.489676 4728 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-g4dbw" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.498863 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzfpf\" (UniqueName: \"kubernetes.io/projected/de6bdf95-032d-42a2-a8b5-0202641a05c1-kube-api-access-fzfpf\") pod \"oauth-openshift-558db77b4-25vw6\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.501067 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h55dj" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.516873 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-p7rtp" event={"ID":"3bd010d3-c618-478c-9dd4-f0c095b872ac","Type":"ContainerStarted","Data":"c85eb1fde8ab8cb1cb4aa7d4517426e11961dac1d29c3384e95fbc89f3f87e22"} Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.517758 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.517961 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/77a21ee7-0d79-437a-8041-55bb91ef0212-srv-cert\") pod \"catalog-operator-68c6474976-ghzzf\" (UID: \"77a21ee7-0d79-437a-8041-55bb91ef0212\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf" Feb 27 10:29:03 crc kubenswrapper[4728]: E0227 10:29:03.518559 4728 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:04.018531441 +0000 UTC m=+163.980897547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.518740 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-762bl\" (UniqueName: \"kubernetes.io/projected/826461a8-eef9-4a1f-b4a7-4ff8076ec729-kube-api-access-762bl\") pod \"auto-csr-approver-29536468-682zs\" (UID: \"826461a8-eef9-4a1f-b4a7-4ff8076ec729\") " pod="openshift-infra/auto-csr-approver-29536468-682zs" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.518789 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49tf4\" (UniqueName: \"kubernetes.io/projected/77a21ee7-0d79-437a-8041-55bb91ef0212-kube-api-access-49tf4\") pod \"catalog-operator-68c6474976-ghzzf\" (UID: \"77a21ee7-0d79-437a-8041-55bb91ef0212\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.518828 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e774d0ad-6c46-42cf-9605-535444f24c79-config\") pod \"kube-controller-manager-operator-78b949d7b-5gztc\" (UID: 
\"e774d0ad-6c46-42cf-9605-535444f24c79\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5gztc" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.518912 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/285ad280-c5dc-4312-afcd-39678c1c5c0b-apiservice-cert\") pod \"packageserver-d55dfcdfc-cxzlx\" (UID: \"285ad280-c5dc-4312-afcd-39678c1c5c0b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519008 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kl4f\" (UniqueName: \"kubernetes.io/projected/f82e7468-152d-46a6-9012-3bb0b4219b3f-kube-api-access-2kl4f\") pod \"collect-profiles-29536455-mx6zt\" (UID: \"f82e7468-152d-46a6-9012-3bb0b4219b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519039 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5eed07da-15ae-438b-a97f-568cd05ea1ee-profile-collector-cert\") pod \"olm-operator-6b444d44fb-fj9d6\" (UID: \"5eed07da-15ae-438b-a97f-568cd05ea1ee\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519063 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f82e7468-152d-46a6-9012-3bb0b4219b3f-secret-volume\") pod \"collect-profiles-29536455-mx6zt\" (UID: \"f82e7468-152d-46a6-9012-3bb0b4219b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519094 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/438710a7-473e-43a3-8aee-6f1f2d5ac756-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xv8vk\" (UID: \"438710a7-473e-43a3-8aee-6f1f2d5ac756\") " pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519126 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7vzd\" (UniqueName: \"kubernetes.io/projected/1e079012-b36b-4046-b569-895b5100265d-kube-api-access-f7vzd\") pod \"service-ca-operator-777779d784-f5nff\" (UID: \"1e079012-b36b-4046-b569-895b5100265d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5nff" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519155 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-w976v\" (UID: \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519179 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4kgv\" (UniqueName: \"kubernetes.io/projected/9f01d342-6bde-4063-b99d-b0efda456aef-kube-api-access-c4kgv\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519185 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f91e18b1-f9ed-4a0d-8aff-e7344791fb5e-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-9bqx2\" (UID: \"f91e18b1-f9ed-4a0d-8aff-e7344791fb5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bqx2" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519276 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqc8z\" (UniqueName: \"kubernetes.io/projected/8adeb294-3f47-4d17-bf64-c0d6328b6f2d-kube-api-access-pqc8z\") pod \"machine-config-operator-74547568cd-4psh7\" (UID: \"8adeb294-3f47-4d17-bf64-c0d6328b6f2d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519307 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qghnq\" (UniqueName: \"kubernetes.io/projected/5bbc683f-19d5-4c72-83a3-511c300446ad-kube-api-access-qghnq\") pod \"control-plane-machine-set-operator-78cbb6b69f-jzdfv\" (UID: \"5bbc683f-19d5-4c72-83a3-511c300446ad\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jzdfv" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519349 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87rs7\" (UniqueName: \"kubernetes.io/projected/438710a7-473e-43a3-8aee-6f1f2d5ac756-kube-api-access-87rs7\") pod \"marketplace-operator-79b997595-xv8vk\" (UID: \"438710a7-473e-43a3-8aee-6f1f2d5ac756\") " pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519390 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/15e7f85b-5175-40d2-ade3-66aa7469d9cd-signing-key\") pod \"service-ca-9c57cc56f-7n4w8\" (UID: \"15e7f85b-5175-40d2-ade3-66aa7469d9cd\") " pod="openshift-service-ca/service-ca-9c57cc56f-7n4w8" Feb 27 10:29:03 crc 
kubenswrapper[4728]: I0227 10:29:03.519460 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/103ae8fe-45e0-4696-be10-bb2ced3ee561-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zjx6k\" (UID: \"103ae8fe-45e0-4696-be10-bb2ced3ee561\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zjx6k" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519491 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5bbc683f-19d5-4c72-83a3-511c300446ad-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jzdfv\" (UID: \"5bbc683f-19d5-4c72-83a3-511c300446ad\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jzdfv" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519536 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4l27\" (UniqueName: \"kubernetes.io/projected/103ae8fe-45e0-4696-be10-bb2ced3ee561-kube-api-access-x4l27\") pod \"multus-admission-controller-857f4d67dd-zjx6k\" (UID: \"103ae8fe-45e0-4696-be10-bb2ced3ee561\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zjx6k" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519562 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6-config-volume\") pod \"dns-default-qt4kh\" (UID: \"ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6\") " pod="openshift-dns/dns-default-qt4kh" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519588 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zs9x\" (UniqueName: \"kubernetes.io/projected/f91e18b1-f9ed-4a0d-8aff-e7344791fb5e-kube-api-access-2zs9x\") pod 
\"package-server-manager-789f6589d5-9bqx2\" (UID: \"f91e18b1-f9ed-4a0d-8aff-e7344791fb5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bqx2" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519611 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e774d0ad-6c46-42cf-9605-535444f24c79-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5gztc\" (UID: \"e774d0ad-6c46-42cf-9605-535444f24c79\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5gztc" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519637 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8adeb294-3f47-4d17-bf64-c0d6328b6f2d-proxy-tls\") pod \"machine-config-operator-74547568cd-4psh7\" (UID: \"8adeb294-3f47-4d17-bf64-c0d6328b6f2d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519659 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dx8r\" (UniqueName: \"kubernetes.io/projected/85142b10-6185-436b-a4eb-0469915055fe-kube-api-access-9dx8r\") pod \"machine-config-controller-84d6567774-bgndl\" (UID: \"85142b10-6185-436b-a4eb-0469915055fe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bgndl" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519685 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8adeb294-3f47-4d17-bf64-c0d6328b6f2d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4psh7\" (UID: \"8adeb294-3f47-4d17-bf64-c0d6328b6f2d\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519714 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/90274913-fbff-4207-a3d8-f163ebcee220-default-certificate\") pod \"router-default-5444994796-8n6md\" (UID: \"90274913-fbff-4207-a3d8-f163ebcee220\") " pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519741 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8adeb294-3f47-4d17-bf64-c0d6328b6f2d-images\") pod \"machine-config-operator-74547568cd-4psh7\" (UID: \"8adeb294-3f47-4d17-bf64-c0d6328b6f2d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519767 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5eed07da-15ae-438b-a97f-568cd05ea1ee-srv-cert\") pod \"olm-operator-6b444d44fb-fj9d6\" (UID: \"5eed07da-15ae-438b-a97f-568cd05ea1ee\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519788 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m82jq\" (UniqueName: \"kubernetes.io/projected/90274913-fbff-4207-a3d8-f163ebcee220-kube-api-access-m82jq\") pod \"router-default-5444994796-8n6md\" (UID: \"90274913-fbff-4207-a3d8-f163ebcee220\") " pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519837 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/6e5d2a5d-0128-4d37-b653-555cc40a8d39-mountpoint-dir\") pod \"csi-hostpathplugin-sftjq\" (UID: \"6e5d2a5d-0128-4d37-b653-555cc40a8d39\") " pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519866 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/85142b10-6185-436b-a4eb-0469915055fe-proxy-tls\") pod \"machine-config-controller-84d6567774-bgndl\" (UID: \"85142b10-6185-436b-a4eb-0469915055fe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bgndl" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519897 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e5d2a5d-0128-4d37-b653-555cc40a8d39-socket-dir\") pod \"csi-hostpathplugin-sftjq\" (UID: \"6e5d2a5d-0128-4d37-b653-555cc40a8d39\") " pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519919 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e99c1615-8102-40e4-ba5a-fa770e09cf9c-cert\") pod \"ingress-canary-kl7tc\" (UID: \"e99c1615-8102-40e4-ba5a-fa770e09cf9c\") " pod="openshift-ingress-canary/ingress-canary-kl7tc" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519938 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftrrd\" (UniqueName: \"kubernetes.io/projected/bf0a7b0a-5d73-4cf1-81b5-00c7232ced39-kube-api-access-ftrrd\") pod \"machine-config-server-wktw4\" (UID: \"bf0a7b0a-5d73-4cf1-81b5-00c7232ced39\") " pod="openshift-machine-config-operator/machine-config-server-wktw4" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519967 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6-metrics-tls\") pod \"dns-default-qt4kh\" (UID: \"ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6\") " pod="openshift-dns/dns-default-qt4kh" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.519999 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75z8h\" (UniqueName: \"kubernetes.io/projected/285ad280-c5dc-4312-afcd-39678c1c5c0b-kube-api-access-75z8h\") pod \"packageserver-d55dfcdfc-cxzlx\" (UID: \"285ad280-c5dc-4312-afcd-39678c1c5c0b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.520027 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8cqf\" (UniqueName: \"kubernetes.io/projected/6e5d2a5d-0128-4d37-b653-555cc40a8d39-kube-api-access-f8cqf\") pod \"csi-hostpathplugin-sftjq\" (UID: \"6e5d2a5d-0128-4d37-b653-555cc40a8d39\") " pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.520052 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-ready\") pod \"cni-sysctl-allowlist-ds-w976v\" (UID: \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.520076 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/90274913-fbff-4207-a3d8-f163ebcee220-stats-auth\") pod \"router-default-5444994796-8n6md\" (UID: \"90274913-fbff-4207-a3d8-f163ebcee220\") " pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.520097 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/15e7f85b-5175-40d2-ade3-66aa7469d9cd-signing-cabundle\") pod \"service-ca-9c57cc56f-7n4w8\" (UID: \"15e7f85b-5175-40d2-ade3-66aa7469d9cd\") " pod="openshift-service-ca/service-ca-9c57cc56f-7n4w8" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.520136 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf0a7b0a-5d73-4cf1-81b5-00c7232ced39-certs\") pod \"machine-config-server-wktw4\" (UID: \"bf0a7b0a-5d73-4cf1-81b5-00c7232ced39\") " pod="openshift-machine-config-operator/machine-config-server-wktw4" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.520156 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90274913-fbff-4207-a3d8-f163ebcee220-service-ca-bundle\") pod \"router-default-5444994796-8n6md\" (UID: \"90274913-fbff-4207-a3d8-f163ebcee220\") " pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.520214 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e5d2a5d-0128-4d37-b653-555cc40a8d39-registration-dir\") pod \"csi-hostpathplugin-sftjq\" (UID: \"6e5d2a5d-0128-4d37-b653-555cc40a8d39\") " pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.520238 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-w976v\" (UID: \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" Feb 27 10:29:03 crc kubenswrapper[4728]: 
I0227 10:29:03.520257 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6nm6\" (UniqueName: \"kubernetes.io/projected/15e7f85b-5175-40d2-ade3-66aa7469d9cd-kube-api-access-x6nm6\") pod \"service-ca-9c57cc56f-7n4w8\" (UID: \"15e7f85b-5175-40d2-ade3-66aa7469d9cd\") " pod="openshift-service-ca/service-ca-9c57cc56f-7n4w8" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.520277 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqbjp\" (UniqueName: \"kubernetes.io/projected/e99c1615-8102-40e4-ba5a-fa770e09cf9c-kube-api-access-tqbjp\") pod \"ingress-canary-kl7tc\" (UID: \"e99c1615-8102-40e4-ba5a-fa770e09cf9c\") " pod="openshift-ingress-canary/ingress-canary-kl7tc" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.520302 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/285ad280-c5dc-4312-afcd-39678c1c5c0b-webhook-cert\") pod \"packageserver-d55dfcdfc-cxzlx\" (UID: \"285ad280-c5dc-4312-afcd-39678c1c5c0b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.520343 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpkm9\" (UniqueName: \"kubernetes.io/projected/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-kube-api-access-kpkm9\") pod \"cni-sysctl-allowlist-ds-w976v\" (UID: \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.520395 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.520423 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e079012-b36b-4046-b569-895b5100265d-serving-cert\") pod \"service-ca-operator-777779d784-f5nff\" (UID: \"1e079012-b36b-4046-b569-895b5100265d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5nff" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.520455 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf0a7b0a-5d73-4cf1-81b5-00c7232ced39-node-bootstrap-token\") pod \"machine-config-server-wktw4\" (UID: \"bf0a7b0a-5d73-4cf1-81b5-00c7232ced39\") " pod="openshift-machine-config-operator/machine-config-server-wktw4" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.520486 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tf78\" (UniqueName: \"kubernetes.io/projected/ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6-kube-api-access-2tf78\") pod \"dns-default-qt4kh\" (UID: \"ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6\") " pod="openshift-dns/dns-default-qt4kh" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.521471 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e774d0ad-6c46-42cf-9605-535444f24c79-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5gztc\" (UID: \"e774d0ad-6c46-42cf-9605-535444f24c79\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5gztc" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.521650 4728 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-fbnmk\" (UniqueName: \"kubernetes.io/projected/5eed07da-15ae-438b-a97f-568cd05ea1ee-kube-api-access-fbnmk\") pod \"olm-operator-6b444d44fb-fj9d6\" (UID: \"5eed07da-15ae-438b-a97f-568cd05ea1ee\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.521472 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8adeb294-3f47-4d17-bf64-c0d6328b6f2d-images\") pod \"machine-config-operator-74547568cd-4psh7\" (UID: \"8adeb294-3f47-4d17-bf64-c0d6328b6f2d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.522032 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e5d2a5d-0128-4d37-b653-555cc40a8d39-registration-dir\") pod \"csi-hostpathplugin-sftjq\" (UID: \"6e5d2a5d-0128-4d37-b653-555cc40a8d39\") " pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.522242 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6-config-volume\") pod \"dns-default-qt4kh\" (UID: \"ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6\") " pod="openshift-dns/dns-default-qt4kh" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.522476 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8adeb294-3f47-4d17-bf64-c0d6328b6f2d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4psh7\" (UID: \"8adeb294-3f47-4d17-bf64-c0d6328b6f2d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.522931 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85142b10-6185-436b-a4eb-0469915055fe-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bgndl\" (UID: \"85142b10-6185-436b-a4eb-0469915055fe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bgndl" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.522998 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90274913-fbff-4207-a3d8-f163ebcee220-metrics-certs\") pod \"router-default-5444994796-8n6md\" (UID: \"90274913-fbff-4207-a3d8-f163ebcee220\") " pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.523027 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6e5d2a5d-0128-4d37-b653-555cc40a8d39-csi-data-dir\") pod \"csi-hostpathplugin-sftjq\" (UID: \"6e5d2a5d-0128-4d37-b653-555cc40a8d39\") " pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.523048 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f82e7468-152d-46a6-9012-3bb0b4219b3f-config-volume\") pod \"collect-profiles-29536455-mx6zt\" (UID: \"f82e7468-152d-46a6-9012-3bb0b4219b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.523069 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e079012-b36b-4046-b569-895b5100265d-config\") pod \"service-ca-operator-777779d784-f5nff\" (UID: \"1e079012-b36b-4046-b569-895b5100265d\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5nff" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.523100 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/77a21ee7-0d79-437a-8041-55bb91ef0212-profile-collector-cert\") pod \"catalog-operator-68c6474976-ghzzf\" (UID: \"77a21ee7-0d79-437a-8041-55bb91ef0212\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.523120 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/77a21ee7-0d79-437a-8041-55bb91ef0212-srv-cert\") pod \"catalog-operator-68c6474976-ghzzf\" (UID: \"77a21ee7-0d79-437a-8041-55bb91ef0212\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.523149 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/285ad280-c5dc-4312-afcd-39678c1c5c0b-tmpfs\") pod \"packageserver-d55dfcdfc-cxzlx\" (UID: \"285ad280-c5dc-4312-afcd-39678c1c5c0b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.523106 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f91e18b1-f9ed-4a0d-8aff-e7344791fb5e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9bqx2\" (UID: \"f91e18b1-f9ed-4a0d-8aff-e7344791fb5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bqx2" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.523213 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/6e5d2a5d-0128-4d37-b653-555cc40a8d39-plugins-dir\") pod \"csi-hostpathplugin-sftjq\" (UID: \"6e5d2a5d-0128-4d37-b653-555cc40a8d39\") " pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.523240 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/438710a7-473e-43a3-8aee-6f1f2d5ac756-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xv8vk\" (UID: \"438710a7-473e-43a3-8aee-6f1f2d5ac756\") " pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.523242 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6e5d2a5d-0128-4d37-b653-555cc40a8d39-csi-data-dir\") pod \"csi-hostpathplugin-sftjq\" (UID: \"6e5d2a5d-0128-4d37-b653-555cc40a8d39\") " pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.523253 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6e5d2a5d-0128-4d37-b653-555cc40a8d39-mountpoint-dir\") pod \"csi-hostpathplugin-sftjq\" (UID: \"6e5d2a5d-0128-4d37-b653-555cc40a8d39\") " pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.523315 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6e5d2a5d-0128-4d37-b653-555cc40a8d39-plugins-dir\") pod \"csi-hostpathplugin-sftjq\" (UID: \"6e5d2a5d-0128-4d37-b653-555cc40a8d39\") " pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.523453 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/5eed07da-15ae-438b-a97f-568cd05ea1ee-profile-collector-cert\") pod \"olm-operator-6b444d44fb-fj9d6\" (UID: \"5eed07da-15ae-438b-a97f-568cd05ea1ee\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.523633 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e5d2a5d-0128-4d37-b653-555cc40a8d39-socket-dir\") pod \"csi-hostpathplugin-sftjq\" (UID: \"6e5d2a5d-0128-4d37-b653-555cc40a8d39\") " pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: E0227 10:29:03.523872 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:04.023858382 +0000 UTC m=+163.986224488 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.525924 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/77a21ee7-0d79-437a-8041-55bb91ef0212-profile-collector-cert\") pod \"catalog-operator-68c6474976-ghzzf\" (UID: \"77a21ee7-0d79-437a-8041-55bb91ef0212\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.526046 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f82e7468-152d-46a6-9012-3bb0b4219b3f-config-volume\") pod \"collect-profiles-29536455-mx6zt\" (UID: \"f82e7468-152d-46a6-9012-3bb0b4219b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.526069 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f82e7468-152d-46a6-9012-3bb0b4219b3f-secret-volume\") pod \"collect-profiles-29536455-mx6zt\" (UID: \"f82e7468-152d-46a6-9012-3bb0b4219b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.526820 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b89pv" event={"ID":"6e70c0a9-c703-42f3-b47c-c32dd62b435b","Type":"ContainerStarted","Data":"155b542fbf5a569bb47b8320af7614fa815030d85f8247ea772bc53bbf93fbbc"} Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.526919 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-lj5z6"] Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.529060 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf0a7b0a-5d73-4cf1-81b5-00c7232ced39-certs\") pod \"machine-config-server-wktw4\" (UID: \"bf0a7b0a-5d73-4cf1-81b5-00c7232ced39\") " pod="openshift-machine-config-operator/machine-config-server-wktw4" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.529140 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/103ae8fe-45e0-4696-be10-bb2ced3ee561-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zjx6k\" (UID: \"103ae8fe-45e0-4696-be10-bb2ced3ee561\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-zjx6k" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.529149 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tvflj" event={"ID":"b1d22605-abd6-4fc6-8352-8fe78ec02332","Type":"ContainerStarted","Data":"56c79be41d0ed79e41abcf9d588c28c4a9e0751970eb545cc2f507f5cd3ea33c"} Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.529170 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf0a7b0a-5d73-4cf1-81b5-00c7232ced39-node-bootstrap-token\") pod \"machine-config-server-wktw4\" (UID: \"bf0a7b0a-5d73-4cf1-81b5-00c7232ced39\") " pod="openshift-machine-config-operator/machine-config-server-wktw4" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.529225 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8adeb294-3f47-4d17-bf64-c0d6328b6f2d-proxy-tls\") pod \"machine-config-operator-74547568cd-4psh7\" (UID: \"8adeb294-3f47-4d17-bf64-c0d6328b6f2d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.529659 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5eed07da-15ae-438b-a97f-568cd05ea1ee-srv-cert\") pod \"olm-operator-6b444d44fb-fj9d6\" (UID: \"5eed07da-15ae-438b-a97f-568cd05ea1ee\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.530054 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6-metrics-tls\") pod \"dns-default-qt4kh\" (UID: \"ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6\") " pod="openshift-dns/dns-default-qt4kh" Feb 27 10:29:03 crc 
kubenswrapper[4728]: I0227 10:29:03.531477 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" event={"ID":"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a","Type":"ContainerStarted","Data":"a078e24dd3d7e3c0ed587a136a964741f1ba2caf7029180b75ed72fa7386c118"} Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.532711 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9m4nv" event={"ID":"6bba0773-58d9-41fe-90da-12a1399387a7","Type":"ContainerStarted","Data":"48205b93c11be9056b06bcde48d4374e91a4dbeb16295518cd92e2e8254cc5ab"} Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.537320 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f01d342-6bde-4063-b99d-b0efda456aef-bound-sa-token\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.538082 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fwnnw" event={"ID":"ba876990-999b-4cd2-bb68-624cbf1b5701","Type":"ContainerStarted","Data":"42dbbc960832f4cc74e8e75fbe0404235fac35429eab819cd60a627b16680753"} Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.545983 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v" event={"ID":"0faf2938-8e5e-451e-99f9-c09124f6a767","Type":"ContainerStarted","Data":"4fbe27654cc541a52e96c6af34cd962ac87d2dec5ce67b2cfb0d13494f38b797"} Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.555763 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpzjx\" (UniqueName: 
\"kubernetes.io/projected/a3656135-373e-4ec6-9cf1-e34d6a95c5a5-kube-api-access-dpzjx\") pod \"downloads-7954f5f757-c46ql\" (UID: \"a3656135-373e-4ec6-9cf1-e34d6a95c5a5\") " pod="openshift-console/downloads-7954f5f757-c46ql" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.599806 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzwwm\" (UniqueName: \"kubernetes.io/projected/2f537b0d-c3b9-4f04-b471-91c204f854a0-kube-api-access-fzwwm\") pod \"ingress-operator-5b745b69d9-kwpmw\" (UID: \"2f537b0d-c3b9-4f04-b471-91c204f854a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.603487 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn"] Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.620403 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2f537b0d-c3b9-4f04-b471-91c204f854a0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kwpmw\" (UID: \"2f537b0d-c3b9-4f04-b471-91c204f854a0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw" Feb 27 10:29:03 crc kubenswrapper[4728]: E0227 10:29:03.624763 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:04.124733258 +0000 UTC m=+164.087099364 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.624539 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.625814 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpkm9\" (UniqueName: \"kubernetes.io/projected/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-kube-api-access-kpkm9\") pod \"cni-sysctl-allowlist-ds-w976v\" (UID: \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.625846 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.625865 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e079012-b36b-4046-b569-895b5100265d-serving-cert\") pod 
\"service-ca-operator-777779d784-f5nff\" (UID: \"1e079012-b36b-4046-b569-895b5100265d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5nff" Feb 27 10:29:03 crc kubenswrapper[4728]: E0227 10:29:03.626234 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:04.12621842 +0000 UTC m=+164.088584526 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.625918 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e774d0ad-6c46-42cf-9605-535444f24c79-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5gztc\" (UID: \"e774d0ad-6c46-42cf-9605-535444f24c79\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5gztc" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.628726 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85142b10-6185-436b-a4eb-0469915055fe-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bgndl\" (UID: \"85142b10-6185-436b-a4eb-0469915055fe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bgndl" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.628775 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90274913-fbff-4207-a3d8-f163ebcee220-metrics-certs\") pod \"router-default-5444994796-8n6md\" (UID: \"90274913-fbff-4207-a3d8-f163ebcee220\") " pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.628809 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e079012-b36b-4046-b569-895b5100265d-config\") pod \"service-ca-operator-777779d784-f5nff\" (UID: \"1e079012-b36b-4046-b569-895b5100265d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5nff" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.628868 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/285ad280-c5dc-4312-afcd-39678c1c5c0b-tmpfs\") pod \"packageserver-d55dfcdfc-cxzlx\" (UID: \"285ad280-c5dc-4312-afcd-39678c1c5c0b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.629354 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e079012-b36b-4046-b569-895b5100265d-serving-cert\") pod \"service-ca-operator-777779d784-f5nff\" (UID: \"1e079012-b36b-4046-b569-895b5100265d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5nff" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.630443 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e774d0ad-6c46-42cf-9605-535444f24c79-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5gztc\" (UID: \"e774d0ad-6c46-42cf-9605-535444f24c79\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5gztc" Feb 27 10:29:03 crc 
kubenswrapper[4728]: I0227 10:29:03.633860 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90274913-fbff-4207-a3d8-f163ebcee220-metrics-certs\") pod \"router-default-5444994796-8n6md\" (UID: \"90274913-fbff-4207-a3d8-f163ebcee220\") " pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.635073 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/285ad280-c5dc-4312-afcd-39678c1c5c0b-tmpfs\") pod \"packageserver-d55dfcdfc-cxzlx\" (UID: \"285ad280-c5dc-4312-afcd-39678c1c5c0b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.635424 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85142b10-6185-436b-a4eb-0469915055fe-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bgndl\" (UID: \"85142b10-6185-436b-a4eb-0469915055fe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bgndl" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.635569 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e079012-b36b-4046-b569-895b5100265d-config\") pod \"service-ca-operator-777779d784-f5nff\" (UID: \"1e079012-b36b-4046-b569-895b5100265d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5nff" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.635691 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/438710a7-473e-43a3-8aee-6f1f2d5ac756-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xv8vk\" (UID: \"438710a7-473e-43a3-8aee-6f1f2d5ac756\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.635731 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-762bl\" (UniqueName: \"kubernetes.io/projected/826461a8-eef9-4a1f-b4a7-4ff8076ec729-kube-api-access-762bl\") pod \"auto-csr-approver-29536468-682zs\" (UID: \"826461a8-eef9-4a1f-b4a7-4ff8076ec729\") " pod="openshift-infra/auto-csr-approver-29536468-682zs" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.635773 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e774d0ad-6c46-42cf-9605-535444f24c79-config\") pod \"kube-controller-manager-operator-78b949d7b-5gztc\" (UID: \"e774d0ad-6c46-42cf-9605-535444f24c79\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5gztc" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.635797 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/285ad280-c5dc-4312-afcd-39678c1c5c0b-apiservice-cert\") pod \"packageserver-d55dfcdfc-cxzlx\" (UID: \"285ad280-c5dc-4312-afcd-39678c1c5c0b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.635836 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/438710a7-473e-43a3-8aee-6f1f2d5ac756-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xv8vk\" (UID: \"438710a7-473e-43a3-8aee-6f1f2d5ac756\") " pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.635864 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-w976v\" (UID: \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.635888 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7vzd\" (UniqueName: \"kubernetes.io/projected/1e079012-b36b-4046-b569-895b5100265d-kube-api-access-f7vzd\") pod \"service-ca-operator-777779d784-f5nff\" (UID: \"1e079012-b36b-4046-b569-895b5100265d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5nff" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.635928 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qghnq\" (UniqueName: \"kubernetes.io/projected/5bbc683f-19d5-4c72-83a3-511c300446ad-kube-api-access-qghnq\") pod \"control-plane-machine-set-operator-78cbb6b69f-jzdfv\" (UID: \"5bbc683f-19d5-4c72-83a3-511c300446ad\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jzdfv" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.635951 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87rs7\" (UniqueName: \"kubernetes.io/projected/438710a7-473e-43a3-8aee-6f1f2d5ac756-kube-api-access-87rs7\") pod \"marketplace-operator-79b997595-xv8vk\" (UID: \"438710a7-473e-43a3-8aee-6f1f2d5ac756\") " pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.635972 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/15e7f85b-5175-40d2-ade3-66aa7469d9cd-signing-key\") pod \"service-ca-9c57cc56f-7n4w8\" (UID: \"15e7f85b-5175-40d2-ade3-66aa7469d9cd\") " pod="openshift-service-ca/service-ca-9c57cc56f-7n4w8" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 
10:29:03.636004 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5bbc683f-19d5-4c72-83a3-511c300446ad-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jzdfv\" (UID: \"5bbc683f-19d5-4c72-83a3-511c300446ad\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jzdfv" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.636032 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e774d0ad-6c46-42cf-9605-535444f24c79-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5gztc\" (UID: \"e774d0ad-6c46-42cf-9605-535444f24c79\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5gztc" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.636062 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dx8r\" (UniqueName: \"kubernetes.io/projected/85142b10-6185-436b-a4eb-0469915055fe-kube-api-access-9dx8r\") pod \"machine-config-controller-84d6567774-bgndl\" (UID: \"85142b10-6185-436b-a4eb-0469915055fe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bgndl" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.636088 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/90274913-fbff-4207-a3d8-f163ebcee220-default-certificate\") pod \"router-default-5444994796-8n6md\" (UID: \"90274913-fbff-4207-a3d8-f163ebcee220\") " pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.636112 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m82jq\" (UniqueName: 
\"kubernetes.io/projected/90274913-fbff-4207-a3d8-f163ebcee220-kube-api-access-m82jq\") pod \"router-default-5444994796-8n6md\" (UID: \"90274913-fbff-4207-a3d8-f163ebcee220\") " pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.636161 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/85142b10-6185-436b-a4eb-0469915055fe-proxy-tls\") pod \"machine-config-controller-84d6567774-bgndl\" (UID: \"85142b10-6185-436b-a4eb-0469915055fe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bgndl" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.636190 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e99c1615-8102-40e4-ba5a-fa770e09cf9c-cert\") pod \"ingress-canary-kl7tc\" (UID: \"e99c1615-8102-40e4-ba5a-fa770e09cf9c\") " pod="openshift-ingress-canary/ingress-canary-kl7tc" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.636230 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75z8h\" (UniqueName: \"kubernetes.io/projected/285ad280-c5dc-4312-afcd-39678c1c5c0b-kube-api-access-75z8h\") pod \"packageserver-d55dfcdfc-cxzlx\" (UID: \"285ad280-c5dc-4312-afcd-39678c1c5c0b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.636262 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-ready\") pod \"cni-sysctl-allowlist-ds-w976v\" (UID: \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.636282 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" 
(UniqueName: \"kubernetes.io/secret/90274913-fbff-4207-a3d8-f163ebcee220-stats-auth\") pod \"router-default-5444994796-8n6md\" (UID: \"90274913-fbff-4207-a3d8-f163ebcee220\") " pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.636307 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/15e7f85b-5175-40d2-ade3-66aa7469d9cd-signing-cabundle\") pod \"service-ca-9c57cc56f-7n4w8\" (UID: \"15e7f85b-5175-40d2-ade3-66aa7469d9cd\") " pod="openshift-service-ca/service-ca-9c57cc56f-7n4w8" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.636329 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90274913-fbff-4207-a3d8-f163ebcee220-service-ca-bundle\") pod \"router-default-5444994796-8n6md\" (UID: \"90274913-fbff-4207-a3d8-f163ebcee220\") " pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.636357 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-w976v\" (UID: \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.636378 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6nm6\" (UniqueName: \"kubernetes.io/projected/15e7f85b-5175-40d2-ade3-66aa7469d9cd-kube-api-access-x6nm6\") pod \"service-ca-9c57cc56f-7n4w8\" (UID: \"15e7f85b-5175-40d2-ade3-66aa7469d9cd\") " pod="openshift-service-ca/service-ca-9c57cc56f-7n4w8" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.636398 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/285ad280-c5dc-4312-afcd-39678c1c5c0b-webhook-cert\") pod \"packageserver-d55dfcdfc-cxzlx\" (UID: \"285ad280-c5dc-4312-afcd-39678c1c5c0b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.636418 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqbjp\" (UniqueName: \"kubernetes.io/projected/e99c1615-8102-40e4-ba5a-fa770e09cf9c-kube-api-access-tqbjp\") pod \"ingress-canary-kl7tc\" (UID: \"e99c1615-8102-40e4-ba5a-fa770e09cf9c\") " pod="openshift-ingress-canary/ingress-canary-kl7tc" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.636418 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9p4mb"] Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.638439 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e774d0ad-6c46-42cf-9605-535444f24c79-config\") pod \"kube-controller-manager-operator-78b949d7b-5gztc\" (UID: \"e774d0ad-6c46-42cf-9605-535444f24c79\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5gztc" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.638872 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-w976v\" (UID: \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.639012 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/438710a7-473e-43a3-8aee-6f1f2d5ac756-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xv8vk\" (UID: 
\"438710a7-473e-43a3-8aee-6f1f2d5ac756\") " pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.640187 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-w976v\" (UID: \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.640673 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-ready\") pod \"cni-sysctl-allowlist-ds-w976v\" (UID: \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.641095 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/85142b10-6185-436b-a4eb-0469915055fe-proxy-tls\") pod \"machine-config-controller-84d6567774-bgndl\" (UID: \"85142b10-6185-436b-a4eb-0469915055fe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bgndl" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.643154 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/438710a7-473e-43a3-8aee-6f1f2d5ac756-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xv8vk\" (UID: \"438710a7-473e-43a3-8aee-6f1f2d5ac756\") " pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.643650 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/285ad280-c5dc-4312-afcd-39678c1c5c0b-webhook-cert\") pod 
\"packageserver-d55dfcdfc-cxzlx\" (UID: \"285ad280-c5dc-4312-afcd-39678c1c5c0b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.644209 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/90274913-fbff-4207-a3d8-f163ebcee220-stats-auth\") pod \"router-default-5444994796-8n6md\" (UID: \"90274913-fbff-4207-a3d8-f163ebcee220\") " pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.644328 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwn5v\" (UniqueName: \"kubernetes.io/projected/dcbb9385-ebab-4698-9b8b-ceaf90e103f2-kube-api-access-pwn5v\") pod \"openshift-apiserver-operator-796bbdcf4f-npkjm\" (UID: \"dcbb9385-ebab-4698-9b8b-ceaf90e103f2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npkjm" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.644719 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/90274913-fbff-4207-a3d8-f163ebcee220-default-certificate\") pod \"router-default-5444994796-8n6md\" (UID: \"90274913-fbff-4207-a3d8-f163ebcee220\") " pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.645181 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e99c1615-8102-40e4-ba5a-fa770e09cf9c-cert\") pod \"ingress-canary-kl7tc\" (UID: \"e99c1615-8102-40e4-ba5a-fa770e09cf9c\") " pod="openshift-ingress-canary/ingress-canary-kl7tc" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.647210 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/90274913-fbff-4207-a3d8-f163ebcee220-service-ca-bundle\") pod \"router-default-5444994796-8n6md\" (UID: \"90274913-fbff-4207-a3d8-f163ebcee220\") " pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.647886 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5bbc683f-19d5-4c72-83a3-511c300446ad-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jzdfv\" (UID: \"5bbc683f-19d5-4c72-83a3-511c300446ad\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jzdfv" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.647913 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/15e7f85b-5175-40d2-ade3-66aa7469d9cd-signing-key\") pod \"service-ca-9c57cc56f-7n4w8\" (UID: \"15e7f85b-5175-40d2-ade3-66aa7469d9cd\") " pod="openshift-service-ca/service-ca-9c57cc56f-7n4w8" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.648307 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/285ad280-c5dc-4312-afcd-39678c1c5c0b-apiservice-cert\") pod \"packageserver-d55dfcdfc-cxzlx\" (UID: \"285ad280-c5dc-4312-afcd-39678c1c5c0b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.653148 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/15e7f85b-5175-40d2-ade3-66aa7469d9cd-signing-cabundle\") pod \"service-ca-9c57cc56f-7n4w8\" (UID: \"15e7f85b-5175-40d2-ade3-66aa7469d9cd\") " pod="openshift-service-ca/service-ca-9c57cc56f-7n4w8" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.654067 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-49tf4\" (UniqueName: \"kubernetes.io/projected/77a21ee7-0d79-437a-8041-55bb91ef0212-kube-api-access-49tf4\") pod \"catalog-operator-68c6474976-ghzzf\" (UID: \"77a21ee7-0d79-437a-8041-55bb91ef0212\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.658877 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q9bgg" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.664509 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.678906 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqc8z\" (UniqueName: \"kubernetes.io/projected/8adeb294-3f47-4d17-bf64-c0d6328b6f2d-kube-api-access-pqc8z\") pod \"machine-config-operator-74547568cd-4psh7\" (UID: \"8adeb294-3f47-4d17-bf64-c0d6328b6f2d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.693277 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-c46ql" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.708099 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl"] Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.709947 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kl4f\" (UniqueName: \"kubernetes.io/projected/f82e7468-152d-46a6-9012-3bb0b4219b3f-kube-api-access-2kl4f\") pod \"collect-profiles-29536455-mx6zt\" (UID: \"f82e7468-152d-46a6-9012-3bb0b4219b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.719784 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftrrd\" (UniqueName: \"kubernetes.io/projected/bf0a7b0a-5d73-4cf1-81b5-00c7232ced39-kube-api-access-ftrrd\") pod \"machine-config-server-wktw4\" (UID: \"bf0a7b0a-5d73-4cf1-81b5-00c7232ced39\") " pod="openshift-machine-config-operator/machine-config-server-wktw4" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.731983 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.734900 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj"] Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.737467 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:03 crc kubenswrapper[4728]: E0227 10:29:03.737582 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:04.237557011 +0000 UTC m=+164.199923117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.738080 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: E0227 10:29:03.738383 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:04.238372795 +0000 UTC m=+164.200738991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.739563 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5lw67" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.743244 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zs9x\" (UniqueName: \"kubernetes.io/projected/f91e18b1-f9ed-4a0d-8aff-e7344791fb5e-kube-api-access-2zs9x\") pod \"package-server-manager-789f6589d5-9bqx2\" (UID: \"f91e18b1-f9ed-4a0d-8aff-e7344791fb5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bqx2" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.762811 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npkjm" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.768267 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8cqf\" (UniqueName: \"kubernetes.io/projected/6e5d2a5d-0128-4d37-b653-555cc40a8d39-kube-api-access-f8cqf\") pod \"csi-hostpathplugin-sftjq\" (UID: \"6e5d2a5d-0128-4d37-b653-555cc40a8d39\") " pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.778424 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4l27\" (UniqueName: \"kubernetes.io/projected/103ae8fe-45e0-4696-be10-bb2ced3ee561-kube-api-access-x4l27\") pod \"multus-admission-controller-857f4d67dd-zjx6k\" (UID: \"103ae8fe-45e0-4696-be10-bb2ced3ee561\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zjx6k" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.793160 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tf78\" (UniqueName: \"kubernetes.io/projected/ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6-kube-api-access-2tf78\") pod \"dns-default-qt4kh\" (UID: \"ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6\") " pod="openshift-dns/dns-default-qt4kh" Feb 27 10:29:03 
crc kubenswrapper[4728]: I0227 10:29:03.795155 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.820732 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbnmk\" (UniqueName: \"kubernetes.io/projected/5eed07da-15ae-438b-a97f-568cd05ea1ee-kube-api-access-fbnmk\") pod \"olm-operator-6b444d44fb-fj9d6\" (UID: \"5eed07da-15ae-438b-a97f-568cd05ea1ee\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6" Feb 27 10:29:03 crc kubenswrapper[4728]: W0227 10:29:03.836850 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0d7166c_3042_4706_9683_6c6a32d29a9c.slice/crio-eb23be9ed170476531702eca60fa01b2f89feaa93fce314539ebc25aac92dc62 WatchSource:0}: Error finding container eb23be9ed170476531702eca60fa01b2f89feaa93fce314539ebc25aac92dc62: Status 404 returned error can't find the container with id eb23be9ed170476531702eca60fa01b2f89feaa93fce314539ebc25aac92dc62 Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.839865 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:03 crc kubenswrapper[4728]: E0227 10:29:03.840309 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:04.34029411 +0000 UTC m=+164.302660216 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.853874 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.859557 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpkm9\" (UniqueName: \"kubernetes.io/projected/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-kube-api-access-kpkm9\") pod \"cni-sysctl-allowlist-ds-w976v\" (UID: \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\") " pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.869799 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zjx6k" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.874958 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7g65x"] Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.880341 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dx8r\" (UniqueName: \"kubernetes.io/projected/85142b10-6185-436b-a4eb-0469915055fe-kube-api-access-9dx8r\") pod \"machine-config-controller-84d6567774-bgndl\" (UID: \"85142b10-6185-436b-a4eb-0469915055fe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bgndl" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.900201 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bqx2" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.900639 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m82jq\" (UniqueName: \"kubernetes.io/projected/90274913-fbff-4207-a3d8-f163ebcee220-kube-api-access-m82jq\") pod \"router-default-5444994796-8n6md\" (UID: \"90274913-fbff-4207-a3d8-f163ebcee220\") " pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.902672 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zdjkf"] Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.909210 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.916328 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-762bl\" (UniqueName: \"kubernetes.io/projected/826461a8-eef9-4a1f-b4a7-4ff8076ec729-kube-api-access-762bl\") pod \"auto-csr-approver-29536468-682zs\" (UID: \"826461a8-eef9-4a1f-b4a7-4ff8076ec729\") " pod="openshift-infra/auto-csr-approver-29536468-682zs" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.926398 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.933744 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6nm6\" (UniqueName: \"kubernetes.io/projected/15e7f85b-5175-40d2-ade3-66aa7469d9cd-kube-api-access-x6nm6\") pod \"service-ca-9c57cc56f-7n4w8\" (UID: \"15e7f85b-5175-40d2-ade3-66aa7469d9cd\") " pod="openshift-service-ca/service-ca-9c57cc56f-7n4w8" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.941914 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:03 crc kubenswrapper[4728]: E0227 10:29:03.942683 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:04.442649868 +0000 UTC m=+164.405015974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.948015 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536468-682zs" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.955578 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqbjp\" (UniqueName: \"kubernetes.io/projected/e99c1615-8102-40e4-ba5a-fa770e09cf9c-kube-api-access-tqbjp\") pod \"ingress-canary-kl7tc\" (UID: \"e99c1615-8102-40e4-ba5a-fa770e09cf9c\") " pod="openshift-ingress-canary/ingress-canary-kl7tc" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.955910 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7n4w8" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.961528 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-g4dbw"] Feb 27 10:29:03 crc kubenswrapper[4728]: W0227 10:29:03.963272 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24dd67e0_7c58_4f3f_a712_ae2639e495fe.slice/crio-bcd8e99bc02efe1ea0b7ad535aaddf1f5779c0694b560ca2b1854c701987bfbb WatchSource:0}: Error finding container bcd8e99bc02efe1ea0b7ad535aaddf1f5779c0694b560ca2b1854c701987bfbb: Status 404 returned error can't find the container with id bcd8e99bc02efe1ea0b7ad535aaddf1f5779c0694b560ca2b1854c701987bfbb Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.972034 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qt4kh" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.977522 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-sftjq" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.981071 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qghnq\" (UniqueName: \"kubernetes.io/projected/5bbc683f-19d5-4c72-83a3-511c300446ad-kube-api-access-qghnq\") pod \"control-plane-machine-set-operator-78cbb6b69f-jzdfv\" (UID: \"5bbc683f-19d5-4c72-83a3-511c300446ad\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jzdfv" Feb 27 10:29:03 crc kubenswrapper[4728]: I0227 10:29:03.998494 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wktw4" Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.006237 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.013816 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kl7tc" Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.020466 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87rs7\" (UniqueName: \"kubernetes.io/projected/438710a7-473e-43a3-8aee-6f1f2d5ac756-kube-api-access-87rs7\") pod \"marketplace-operator-79b997595-xv8vk\" (UID: \"438710a7-473e-43a3-8aee-6f1f2d5ac756\") " pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.025950 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e774d0ad-6c46-42cf-9605-535444f24c79-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5gztc\" (UID: \"e774d0ad-6c46-42cf-9605-535444f24c79\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5gztc" Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.031475 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75z8h\" (UniqueName: \"kubernetes.io/projected/285ad280-c5dc-4312-afcd-39678c1c5c0b-kube-api-access-75z8h\") pod \"packageserver-d55dfcdfc-cxzlx\" (UID: \"285ad280-c5dc-4312-afcd-39678c1c5c0b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.053110 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:04 crc 
kubenswrapper[4728]: E0227 10:29:04.053561 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:04.553541447 +0000 UTC m=+164.515907563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.059632 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7vzd\" (UniqueName: \"kubernetes.io/projected/1e079012-b36b-4046-b569-895b5100265d-kube-api-access-f7vzd\") pod \"service-ca-operator-777779d784-f5nff\" (UID: \"1e079012-b36b-4046-b569-895b5100265d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5nff" Feb 27 10:29:04 crc kubenswrapper[4728]: W0227 10:29:04.070653 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36e6e9de_b708_4242_9251_1ba3b849a749.slice/crio-d4672091b167e4c296ff9ffc211fd59e4a496d0e29a060a620ee48c93601f48a WatchSource:0}: Error finding container d4672091b167e4c296ff9ffc211fd59e4a496d0e29a060a620ee48c93601f48a: Status 404 returned error can't find the container with id d4672091b167e4c296ff9ffc211fd59e4a496d0e29a060a620ee48c93601f48a Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.120461 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q9bgg"] Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.130470 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5lw67"] Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.131824 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.133207 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5gztc" Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.140823 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jzdfv" Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.147209 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bgndl" Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.154278 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.154477 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf"] Feb 27 10:29:04 crc kubenswrapper[4728]: E0227 10:29:04.154717 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:04.65470113 +0000 UTC m=+164.617067236 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.160566 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.180035 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h55dj"] Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.218441 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5nff" Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.235161 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.255312 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:04 crc kubenswrapper[4728]: E0227 10:29:04.255471 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:04.755442572 +0000 UTC m=+164.717808678 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.255549 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:04 crc kubenswrapper[4728]: E0227 10:29:04.255888 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:04.755880915 +0000 UTC m=+164.718247021 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.357966 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:04 crc kubenswrapper[4728]: E0227 10:29:04.358719 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:04.858692416 +0000 UTC m=+164.821058522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.358845 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:04 crc kubenswrapper[4728]: E0227 10:29:04.359162 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:04.859145418 +0000 UTC m=+164.821511524 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:04 crc kubenswrapper[4728]: W0227 10:29:04.373468 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c038_19ee_4ac6_b2dc_281920c6be9a.slice/crio-3e46f19eb16952c4d4aa8e4ad11cddb7709364b56f8e0b49ba1637f81d747e95 WatchSource:0}: Error finding container 3e46f19eb16952c4d4aa8e4ad11cddb7709364b56f8e0b49ba1637f81d747e95: Status 404 returned error can't find the container with id 3e46f19eb16952c4d4aa8e4ad11cddb7709364b56f8e0b49ba1637f81d747e95 Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.463635 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:04 crc kubenswrapper[4728]: E0227 10:29:04.463840 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:04.963799481 +0000 UTC m=+164.926165577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.465047 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:04 crc kubenswrapper[4728]: E0227 10:29:04.465427 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:04.965418787 +0000 UTC m=+164.927784893 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.475944 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw"] Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.550608 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5lw67" event={"ID":"7319e158-317b-4f98-b9da-0481f2c0aca8","Type":"ContainerStarted","Data":"7efde232881fcb422cb35c5dc5085b4501b345cb5128cb60f9faa035799290bf"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.553327 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" event={"ID":"206a150d-98c4-4204-84e3-609198888fd4","Type":"ContainerStarted","Data":"e403aeeb3708bfd85bd5b46298c0335643e93a274ca8fcb1d47a4bd0e30a0cdc"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.553354 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" event={"ID":"206a150d-98c4-4204-84e3-609198888fd4","Type":"ContainerStarted","Data":"89dc0b0137c3f7ee4aff8fd548e7714289a5f1f1aea20ec948d07bce43e03fc7"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.557207 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h55dj" 
event={"ID":"20c5c038-19ee-4ac6-b2dc-281920c6be9a","Type":"ContainerStarted","Data":"3e46f19eb16952c4d4aa8e4ad11cddb7709364b56f8e0b49ba1637f81d747e95"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.561600 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" event={"ID":"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a","Type":"ContainerStarted","Data":"3543c7d69eef4755964ea7d1f54094ce89e5f119789732465e5e76e5a2a32963"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.562361 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.563455 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" event={"ID":"e0d7166c-3042-4706-9683-6c6a32d29a9c","Type":"ContainerStarted","Data":"eb23be9ed170476531702eca60fa01b2f89feaa93fce314539ebc25aac92dc62"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.564462 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tvflj" event={"ID":"b1d22605-abd6-4fc6-8352-8fe78ec02332","Type":"ContainerStarted","Data":"f0362825852f38e4ef290066e730fbff332e995d46735cb7c29c1f8dc563bad0"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.565682 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:04 crc kubenswrapper[4728]: E0227 10:29:04.565998 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:05.065984524 +0000 UTC m=+165.028350630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.566063 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" event={"ID":"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02","Type":"ContainerStarted","Data":"fe54f70001043ea71ce73bc07f6b6ae3fdb099d369a7b9eeb464e4b7ef37429c"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.567154 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-g4dbw" event={"ID":"36e6e9de-b708-4242-9251-1ba3b849a749","Type":"ContainerStarted","Data":"d4672091b167e4c296ff9ffc211fd59e4a496d0e29a060a620ee48c93601f48a"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.567858 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g65x" event={"ID":"24dd67e0-7c58-4f3f-a712-ae2639e495fe","Type":"ContainerStarted","Data":"bcd8e99bc02efe1ea0b7ad535aaddf1f5779c0694b560ca2b1854c701987bfbb"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.570210 4728 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-d2nkm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= 
Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.570246 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" podUID="52d0f01e-a7ab-4c07-bd46-d014e84c3d6a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.582617 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9p4mb" event={"ID":"e87e4167-c76f-4adc-9d67-28485f6a6397","Type":"ContainerStarted","Data":"ae52767bf4f68d13b66281c556c4f051bc5d8a0e148191634985b70b1ab80cde"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.582653 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9p4mb" event={"ID":"e87e4167-c76f-4adc-9d67-28485f6a6397","Type":"ContainerStarted","Data":"92b59a1a57024c136372ebc3c857113066b0e3b5a27ea97a8e5f9b245f4dbced"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.590807 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl" event={"ID":"7ee9013e-5452-4e18-b4ce-3af1c8257662","Type":"ContainerStarted","Data":"d3db41ecfac62eb5eb4c0e5b74fb50cfe87edda35a31686abcdfac394a158c83"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.590873 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl" event={"ID":"7ee9013e-5452-4e18-b4ce-3af1c8257662","Type":"ContainerStarted","Data":"8638f59aab6054704c9db0d5bf77dddf055b59db97bc992bafee8526ad40c04e"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.594897 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-25vw6"] Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.595924 
4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-p7rtp" event={"ID":"3bd010d3-c618-478c-9dd4-f0c095b872ac","Type":"ContainerStarted","Data":"f43c8bb55347fe393a251f8bb33224a50b8454d931c15e7a46a681413a7db5cd"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.617130 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-c46ql"] Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.622446 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qt4kh"] Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.640931 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b89pv" event={"ID":"6e70c0a9-c703-42f3-b47c-c32dd62b435b","Type":"ContainerStarted","Data":"815b568fdbd3bb1efbb011f784087673ccb56c8bfe55222428db7aa7f25c25c9"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.642590 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf" event={"ID":"77a21ee7-0d79-437a-8041-55bb91ef0212","Type":"ContainerStarted","Data":"433e1a65b693bf9478f6d12de624ee557cd3ce919e44d503d49ac4f34fbbca3e"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.644010 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fwnnw" event={"ID":"ba876990-999b-4cd2-bb68-624cbf1b5701","Type":"ContainerStarted","Data":"d9fda57bda6fec4c68576f36267a60ceffd9fe92ffcc475c3b80402dc408c103"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.645202 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v" event={"ID":"0faf2938-8e5e-451e-99f9-c09124f6a767","Type":"ContainerStarted","Data":"b87058445610bb2fbf4a0a6fd4556a77113bdbf85cc5223c790f821cb7cfddd6"} Feb 
27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.646616 4728 generic.go:334] "Generic (PLEG): container finished" podID="f5b2eb8e-7b36-40ac-b745-9e1a3efaec21" containerID="a0377c9b9f7c764c2fe6c1713731000dbd58cbcadb6287949afe233d8fa1c6bf" exitCode=0 Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.646660 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2f6px" event={"ID":"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21","Type":"ContainerDied","Data":"a0377c9b9f7c764c2fe6c1713731000dbd58cbcadb6287949afe233d8fa1c6bf"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.646677 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2f6px" event={"ID":"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21","Type":"ContainerStarted","Data":"792f64738bf70ef53b030fffe169410d96eddcb04b6f99c20491a63ae5c3c718"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.650523 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" event={"ID":"6b237825-2c85-4de6-a839-b91e7d23d433","Type":"ContainerStarted","Data":"106dba2d55e6bff86b2b2c7e4241935e72a835a6e94cccf2a9120aade5d03c7b"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.654639 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9m4nv" event={"ID":"6bba0773-58d9-41fe-90da-12a1399387a7","Type":"ContainerStarted","Data":"d44a7be87f9050d1bdb16576800fc368b8dc9a355f80f496c24114f78ef0ae41"} Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.664307 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q9bgg" event={"ID":"8f2ca8d1-e849-4530-9260-09dd161dc4c3","Type":"ContainerStarted","Data":"ae46d151259bae3298bfdb7a280e1641442151a811a88dc9a9f1babe868f2b03"} Feb 27 10:29:04 crc 
kubenswrapper[4728]: I0227 10:29:04.667782 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:04 crc kubenswrapper[4728]: E0227 10:29:04.668780 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:05.168769263 +0000 UTC m=+165.131135369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.770068 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:04 crc kubenswrapper[4728]: E0227 10:29:04.770238 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 10:29:05.270211195 +0000 UTC m=+165.232577301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.770565 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7"] Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.770608 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npkjm"] Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.770632 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:04 crc kubenswrapper[4728]: E0227 10:29:04.770896 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:05.270884894 +0000 UTC m=+165.233251000 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.871519 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:04 crc kubenswrapper[4728]: E0227 10:29:04.871911 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:05.371884194 +0000 UTC m=+165.334250330 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:04 crc kubenswrapper[4728]: I0227 10:29:04.978554 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:04 crc kubenswrapper[4728]: E0227 10:29:04.979352 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:05.479337215 +0000 UTC m=+165.441703331 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.025357 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7n4w8"] Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.079620 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:05 crc kubenswrapper[4728]: E0227 10:29:05.079804 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:05.579782269 +0000 UTC m=+165.542148375 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.079959 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:05 crc kubenswrapper[4728]: E0227 10:29:05.080488 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:05.580469799 +0000 UTC m=+165.542835905 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.181720 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:05 crc kubenswrapper[4728]: E0227 10:29:05.182162 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:05.682139937 +0000 UTC m=+165.644506043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.275475 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sftjq"] Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.284692 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:05 crc kubenswrapper[4728]: E0227 10:29:05.285237 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:05.785223345 +0000 UTC m=+165.747589451 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.312045 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kl7tc"] Feb 27 10:29:05 crc kubenswrapper[4728]: W0227 10:29:05.362307 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e5d2a5d_0128_4d37_b653_555cc40a8d39.slice/crio-7ca2c6fd3fe14dcc0e1ee8f85057dfc7d92211868f886916c3e7a8e540ebe82c WatchSource:0}: Error finding container 7ca2c6fd3fe14dcc0e1ee8f85057dfc7d92211868f886916c3e7a8e540ebe82c: Status 404 returned error can't find the container with id 7ca2c6fd3fe14dcc0e1ee8f85057dfc7d92211868f886916c3e7a8e540ebe82c Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.385774 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:05 crc kubenswrapper[4728]: E0227 10:29:05.387284 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:05.887260244 +0000 UTC m=+165.849626350 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.461323 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536468-682zs"] Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.474332 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt"] Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.484329 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xv8vk"] Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.487777 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:05 crc kubenswrapper[4728]: E0227 10:29:05.488624 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:05.988609623 +0000 UTC m=+165.950975729 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.501486 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zjx6k"] Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.526251 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bqx2"] Feb 27 10:29:05 crc kubenswrapper[4728]: W0227 10:29:05.535030 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf82e7468_152d_46a6_9012_3bb0b4219b3f.slice/crio-5b6dedda68e3e6d9345b1c943c9dedffb37215d26d72fde983e13f974ad05ec9 WatchSource:0}: Error finding container 5b6dedda68e3e6d9345b1c943c9dedffb37215d26d72fde983e13f974ad05ec9: Status 404 returned error can't find the container with id 5b6dedda68e3e6d9345b1c943c9dedffb37215d26d72fde983e13f974ad05ec9 Feb 27 10:29:05 crc kubenswrapper[4728]: W0227 10:29:05.588876 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod103ae8fe_45e0_4696_be10_bb2ced3ee561.slice/crio-7e6b7b0426c1852a3ce288e71549d3b97520fd0f374b7c6265b9b965103df599 WatchSource:0}: Error finding container 7e6b7b0426c1852a3ce288e71549d3b97520fd0f374b7c6265b9b965103df599: Status 404 returned error can't find the container with id 7e6b7b0426c1852a3ce288e71549d3b97520fd0f374b7c6265b9b965103df599 Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.589077 4728 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.590249 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:05 crc kubenswrapper[4728]: E0227 10:29:05.591537 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:06.091522507 +0000 UTC m=+166.053888613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.593655 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bgndl"] Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.606637 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-tvflj" podStartSLOduration=96.606617183 podStartE2EDuration="1m36.606617183s" podCreationTimestamp="2026-02-27 10:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 
10:29:05.595789708 +0000 UTC m=+165.558155814" watchObservedRunningTime="2026-02-27 10:29:05.606617183 +0000 UTC m=+165.568983289" Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.608281 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6"] Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.608337 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5gztc"] Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.611932 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f5nff"] Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.652536 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vmtwl" podStartSLOduration=95.652518843 podStartE2EDuration="1m35.652518843s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:05.637246581 +0000 UTC m=+165.599612687" watchObservedRunningTime="2026-02-27 10:29:05.652518843 +0000 UTC m=+165.614884939" Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.681067 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-b89pv" podStartSLOduration=95.6810481 podStartE2EDuration="1m35.6810481s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:05.679351272 +0000 UTC m=+165.641717378" watchObservedRunningTime="2026-02-27 10:29:05.6810481 +0000 UTC m=+165.643414206" Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 
10:29:05.694340 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:05 crc kubenswrapper[4728]: E0227 10:29:05.694777 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:06.194755749 +0000 UTC m=+166.157121925 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.707308 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wktw4" event={"ID":"bf0a7b0a-5d73-4cf1-81b5-00c7232ced39","Type":"ContainerStarted","Data":"e591bd6142ab6caccfd46794eb13a170be8b374730346208bb08fadaed2d13a5"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.707354 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wktw4" event={"ID":"bf0a7b0a-5d73-4cf1-81b5-00c7232ced39","Type":"ContainerStarted","Data":"453e31a9d363e62f50dad8045be8e1aac7cd3049c1b69e031a1b425b95eff052"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.712675 4728 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" podStartSLOduration=95.712657796 podStartE2EDuration="1m35.712657796s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:05.711877673 +0000 UTC m=+165.674243789" watchObservedRunningTime="2026-02-27 10:29:05.712657796 +0000 UTC m=+165.675023902" Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.718456 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v" event={"ID":"0faf2938-8e5e-451e-99f9-c09124f6a767","Type":"ContainerStarted","Data":"48082026fd395329f5023a7d275454e67536549e0ab6075645079ebef883aede"} Feb 27 10:29:05 crc kubenswrapper[4728]: W0227 10:29:05.719030 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5eed07da_15ae_438b_a97f_568cd05ea1ee.slice/crio-9be927e135b7d26bfaa88b9ac9fa7d3b0ac24fc839c2d76ec44f1817ae659ac1 WatchSource:0}: Error finding container 9be927e135b7d26bfaa88b9ac9fa7d3b0ac24fc839c2d76ec44f1817ae659ac1: Status 404 returned error can't find the container with id 9be927e135b7d26bfaa88b9ac9fa7d3b0ac24fc839c2d76ec44f1817ae659ac1 Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.731231 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" event={"ID":"de6bdf95-032d-42a2-a8b5-0202641a05c1","Type":"ContainerStarted","Data":"0282cb8eb7faa5bd65f9280e85830dca357989a93dd75e817f870d1e5713a634"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.736417 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx"] Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 
10:29:05.737326 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jzdfv"] Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.766646 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-c46ql" event={"ID":"a3656135-373e-4ec6-9cf1-e34d6a95c5a5","Type":"ContainerStarted","Data":"99fb97cde50373a0edaea6cf7538fdb30201153e1ae52206bd6d8c563955e15f"} Feb 27 10:29:05 crc kubenswrapper[4728]: W0227 10:29:05.770275 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e079012_b36b_4046_b569_895b5100265d.slice/crio-713fd4dbd65b89df6c8fb713c9921068b1282dd716f499bef22642799ae508c1 WatchSource:0}: Error finding container 713fd4dbd65b89df6c8fb713c9921068b1282dd716f499bef22642799ae508c1: Status 404 returned error can't find the container with id 713fd4dbd65b89df6c8fb713c9921068b1282dd716f499bef22642799ae508c1 Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.772586 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sftjq" event={"ID":"6e5d2a5d-0128-4d37-b653-555cc40a8d39","Type":"ContainerStarted","Data":"7ca2c6fd3fe14dcc0e1ee8f85057dfc7d92211868f886916c3e7a8e540ebe82c"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.789628 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npkjm" event={"ID":"dcbb9385-ebab-4698-9b8b-ceaf90e103f2","Type":"ContainerStarted","Data":"d74e5d4f8a6b7e729646138d1b796525b01d53ceb0f81832d744a04e6172860a"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.789675 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npkjm" 
event={"ID":"dcbb9385-ebab-4698-9b8b-ceaf90e103f2","Type":"ContainerStarted","Data":"5686048e8024a79928f3a327a3107bfded5f5328efeee6f15660ddc03172de90"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.791928 4728 generic.go:334] "Generic (PLEG): container finished" podID="24dd67e0-7c58-4f3f-a712-ae2639e495fe" containerID="1e496aa0ba6b4490299c29fe0d8a4e3eb7c86cc7a3d85e8f29f82cddec0873a9" exitCode=0 Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.792135 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g65x" event={"ID":"24dd67e0-7c58-4f3f-a712-ae2639e495fe","Type":"ContainerDied","Data":"1e496aa0ba6b4490299c29fe0d8a4e3eb7c86cc7a3d85e8f29f82cddec0873a9"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.796014 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:05 crc kubenswrapper[4728]: E0227 10:29:05.797202 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:06.297180428 +0000 UTC m=+166.259546534 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.807265 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" event={"ID":"ac17bc60-379b-44dd-bf6b-2b9ecf87bf02","Type":"ContainerStarted","Data":"0fe9f3abc477968d63e075bd41f875a557dec283f12f83696dea188ae9f7ef94"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.809669 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-lj5z6" podStartSLOduration=96.809656571 podStartE2EDuration="1m36.809656571s" podCreationTimestamp="2026-02-27 10:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:05.809130347 +0000 UTC m=+165.771496463" watchObservedRunningTime="2026-02-27 10:29:05.809656571 +0000 UTC m=+165.772022677" Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.810480 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9m4nv" podStartSLOduration=95.810475105 podStartE2EDuration="1m35.810475105s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:05.751572447 +0000 UTC m=+165.713938553" watchObservedRunningTime="2026-02-27 10:29:05.810475105 
+0000 UTC m=+165.772841211" Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.831040 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bqx2" event={"ID":"f91e18b1-f9ed-4a0d-8aff-e7344791fb5e","Type":"ContainerStarted","Data":"f289908e09eb7a7b65099687f9d9e4e59fad16cd115e9418bf19253e65e9de47"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.832132 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-8n6md" event={"ID":"90274913-fbff-4207-a3d8-f163ebcee220","Type":"ContainerStarted","Data":"04ccca8a3c9bc75b0503638282d8fb41c421ae31dfc79aa2b3ce3b984b229ed8"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.832156 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-8n6md" event={"ID":"90274913-fbff-4207-a3d8-f163ebcee220","Type":"ContainerStarted","Data":"3f4d527e955f5a12b07ef0ccf6ba6deb8312433664e520de94de1fd9aae2d308"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.835400 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7n4w8" event={"ID":"15e7f85b-5175-40d2-ade3-66aa7469d9cd","Type":"ContainerStarted","Data":"4813e8ec2712225b0151e2a40dd10e4a3ef13a31ab4f4689d185f551bcbb1124"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.840587 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-p7rtp" event={"ID":"3bd010d3-c618-478c-9dd4-f0c095b872ac","Type":"ContainerStarted","Data":"ed6c4141b26e1ae362ed364b66e6be6963d7fe7c438cc79be1723659c334f664"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.844624 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zjx6k" 
event={"ID":"103ae8fe-45e0-4696-be10-bb2ced3ee561","Type":"ContainerStarted","Data":"7e6b7b0426c1852a3ce288e71549d3b97520fd0f374b7c6265b9b965103df599"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.870312 4728 generic.go:334] "Generic (PLEG): container finished" podID="6b237825-2c85-4de6-a839-b91e7d23d433" containerID="6a1b299a5a00decb7f20a835b3f185acebf736212e2e5f6a5115a052af7bc6df" exitCode=0 Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.870397 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" event={"ID":"6b237825-2c85-4de6-a839-b91e7d23d433","Type":"ContainerDied","Data":"6a1b299a5a00decb7f20a835b3f185acebf736212e2e5f6a5115a052af7bc6df"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.873952 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt" event={"ID":"f82e7468-152d-46a6-9012-3bb0b4219b3f","Type":"ContainerStarted","Data":"5b6dedda68e3e6d9345b1c943c9dedffb37215d26d72fde983e13f974ad05ec9"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.886857 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-7n4w8" podStartSLOduration=95.886837286 podStartE2EDuration="1m35.886837286s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:05.88025599 +0000 UTC m=+165.842622096" watchObservedRunningTime="2026-02-27 10:29:05.886837286 +0000 UTC m=+165.849203392" Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.892530 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q9bgg" 
event={"ID":"8f2ca8d1-e849-4530-9260-09dd161dc4c3","Type":"ContainerStarted","Data":"f5ad0d69027da1ef7b5b43cc4de2faff86162cb6455c63097c9561adb4b8efb1"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.898150 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:05 crc kubenswrapper[4728]: E0227 10:29:05.901606 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:06.401591834 +0000 UTC m=+166.363957940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.919330 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" event={"ID":"438710a7-473e-43a3-8aee-6f1f2d5ac756","Type":"ContainerStarted","Data":"e1713c9eb79bc30322252cbda60fc1ec5d1ff31c8c4887d3abfdf2dc2017d475"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.946344 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kl7tc" 
event={"ID":"e99c1615-8102-40e4-ba5a-fa770e09cf9c","Type":"ContainerStarted","Data":"5447ba95435ee9899e2abae8c56202317036d70bdc4cedfcc6978f9ed1115055"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.951021 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw" event={"ID":"2f537b0d-c3b9-4f04-b471-91c204f854a0","Type":"ContainerStarted","Data":"30ffaadd10d3755ed47c6ebef182e5fc8d954b5b4788f87aa54a0c53d72e3107"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.951059 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw" event={"ID":"2f537b0d-c3b9-4f04-b471-91c204f854a0","Type":"ContainerStarted","Data":"6f6303e0a4366a58ec14f659075dd0858cf4b2c459b54e3d09e25014cc59c3be"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.959306 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-8n6md" podStartSLOduration=95.959290197 podStartE2EDuration="1m35.959290197s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:05.957539138 +0000 UTC m=+165.919905234" watchObservedRunningTime="2026-02-27 10:29:05.959290197 +0000 UTC m=+165.921656303" Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.961042 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npkjm" podStartSLOduration=96.961033317 podStartE2EDuration="1m36.961033317s" podCreationTimestamp="2026-02-27 10:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:05.912940935 +0000 UTC m=+165.875307041" watchObservedRunningTime="2026-02-27 10:29:05.961033317 +0000 UTC 
m=+165.923399423" Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.962561 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qt4kh" event={"ID":"ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6","Type":"ContainerStarted","Data":"fac09314784fc08a0bcdd243e705433aa37409c605948d55aecf180b5e7b999d"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.976314 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fwnnw" event={"ID":"ba876990-999b-4cd2-bb68-624cbf1b5701","Type":"ContainerStarted","Data":"2c4a865d1d7dc24ebef0095aaa542212e3b661a617f3c744179f4c1f755970f8"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.979894 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf" event={"ID":"77a21ee7-0d79-437a-8041-55bb91ef0212","Type":"ContainerStarted","Data":"60a0e3831da010c0020a67dec8a6d5cd274010b2c596b0ed6f1e5a0958ae38e9"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.982963 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf" Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.985088 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7" event={"ID":"8adeb294-3f47-4d17-bf64-c0d6328b6f2d","Type":"ContainerStarted","Data":"72d1b9a30db59391ab9befbdc97bd05b2160f1e27ce1ab808d5b040c5aa27e53"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.985130 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7" event={"ID":"8adeb294-3f47-4d17-bf64-c0d6328b6f2d","Type":"ContainerStarted","Data":"32539f90b719748e59efd5efd9eea42df5a655e11c7d39e1590d892086717a4b"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.989446 4728 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" event={"ID":"e0d7166c-3042-4706-9683-6c6a32d29a9c","Type":"ContainerStarted","Data":"40bbbe8abe3c7d1dbae7b3ba6b9103f549120818480625470e257632c58203a8"} Feb 27 10:29:05 crc kubenswrapper[4728]: I0227 10:29:05.990291 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:05.998904 4728 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-ghzzf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:05.998970 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf" podUID="77a21ee7-0d79-437a-8041-55bb91ef0212" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.009288 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.009551 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-g4dbw" 
event={"ID":"36e6e9de-b708-4242-9251-1ba3b849a749","Type":"ContainerStarted","Data":"0568b96a8ff9e6be63291ad1803077952e9f72fd6177b5895b07b30a667f3e94"} Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.010316 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-g4dbw" Feb 27 10:29:06 crc kubenswrapper[4728]: E0227 10:29:06.011126 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:06.511109854 +0000 UTC m=+166.473475960 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.019664 4728 patch_prober.go:28] interesting pod/console-operator-58897d9998-g4dbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.019713 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-g4dbw" podUID="36e6e9de-b708-4242-9251-1ba3b849a749" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 
10:29:06.021941 4728 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-tz4jj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.022650 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" podUID="e0d7166c-3042-4706-9683-6c6a32d29a9c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.032603 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd58v" podStartSLOduration=97.032583533 podStartE2EDuration="1m37.032583533s" podCreationTimestamp="2026-02-27 10:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:06.028211978 +0000 UTC m=+165.990578074" watchObservedRunningTime="2026-02-27 10:29:06.032583533 +0000 UTC m=+165.994949629" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.052575 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5lw67" event={"ID":"7319e158-317b-4f98-b9da-0481f2c0aca8","Type":"ContainerStarted","Data":"5dfe433fcbca12898a54cdebf5f21127dbec3edf1c42569b187358f897d03453"} Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.057018 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536468-682zs" 
event={"ID":"826461a8-eef9-4a1f-b4a7-4ff8076ec729","Type":"ContainerStarted","Data":"5d7bfae19605849c6effc627087526eab119f59e77c0269f316a9cf2300fdf43"} Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.062426 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" event={"ID":"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad","Type":"ContainerStarted","Data":"1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf"} Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.062460 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" event={"ID":"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad","Type":"ContainerStarted","Data":"0cf3e969d7eea28f8ac45a8700df99275ff52e402615c51d8b23b8728d46e473"} Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.062809 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.075708 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-p7rtp" podStartSLOduration=96.075691643 podStartE2EDuration="1m36.075691643s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:06.075621181 +0000 UTC m=+166.037987287" watchObservedRunningTime="2026-02-27 10:29:06.075691643 +0000 UTC m=+166.038057749" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.105296 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.112938 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:06 crc kubenswrapper[4728]: E0227 10:29:06.113402 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:06.61338249 +0000 UTC m=+166.575748596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.133651 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.139838 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-zdjkf" podStartSLOduration=96.139818158 podStartE2EDuration="1m36.139818158s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:06.128633892 +0000 UTC m=+166.090999998" watchObservedRunningTime="2026-02-27 10:29:06.139818158 +0000 UTC m=+166.102184254" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.140028 4728 patch_prober.go:28] interesting pod/router-default-5444994796-8n6md 
container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.140077 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8n6md" podUID="90274913-fbff-4207-a3d8-f163ebcee220" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.215521 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-kl7tc" podStartSLOduration=6.2154893 podStartE2EDuration="6.2154893s" podCreationTimestamp="2026-02-27 10:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:06.185144021 +0000 UTC m=+166.147510127" watchObservedRunningTime="2026-02-27 10:29:06.2154893 +0000 UTC m=+166.177855406" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.216392 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:06 crc kubenswrapper[4728]: E0227 10:29:06.216665 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:06.716651333 +0000 UTC m=+166.679017439 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.242547 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-fwnnw" podStartSLOduration=96.242530296 podStartE2EDuration="1m36.242530296s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:06.21582559 +0000 UTC m=+166.178191696" watchObservedRunningTime="2026-02-27 10:29:06.242530296 +0000 UTC m=+166.204896402" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.243895 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf" podStartSLOduration=96.243888234 podStartE2EDuration="1m36.243888234s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:06.243039391 +0000 UTC m=+166.205405497" watchObservedRunningTime="2026-02-27 10:29:06.243888234 +0000 UTC m=+166.206254340" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.258631 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.270001 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" podStartSLOduration=6.269963403 podStartE2EDuration="6.269963403s" podCreationTimestamp="2026-02-27 10:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:06.2684598 +0000 UTC m=+166.230825906" watchObservedRunningTime="2026-02-27 10:29:06.269963403 +0000 UTC m=+166.232329509" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.318137 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:06 crc kubenswrapper[4728]: E0227 10:29:06.319108 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:06.819094953 +0000 UTC m=+166.781461059 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.336348 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" podStartSLOduration=96.336333441 podStartE2EDuration="1m36.336333441s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:06.33519309 +0000 UTC m=+166.297559196" watchObservedRunningTime="2026-02-27 10:29:06.336333441 +0000 UTC m=+166.298699547" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.411400 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-g4dbw" podStartSLOduration=97.411379786 podStartE2EDuration="1m37.411379786s" podCreationTimestamp="2026-02-27 10:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:06.404664386 +0000 UTC m=+166.367030492" watchObservedRunningTime="2026-02-27 10:29:06.411379786 +0000 UTC m=+166.373745892" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.422093 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:06 crc kubenswrapper[4728]: E0227 10:29:06.422433 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:06.922417279 +0000 UTC m=+166.884783385 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.439271 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q9bgg" podStartSLOduration=96.439249185 podStartE2EDuration="1m36.439249185s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:06.435943902 +0000 UTC m=+166.398310008" watchObservedRunningTime="2026-02-27 10:29:06.439249185 +0000 UTC m=+166.401615291" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.526382 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:06 crc kubenswrapper[4728]: E0227 10:29:06.527027 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:07.026995019 +0000 UTC m=+166.989361125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.627625 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:06 crc kubenswrapper[4728]: E0227 10:29:06.627978 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:07.127953787 +0000 UTC m=+167.090319893 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.729140 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:06 crc kubenswrapper[4728]: E0227 10:29:06.729478 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:07.229465951 +0000 UTC m=+167.191832047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.788106 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.832006 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:06 crc kubenswrapper[4728]: E0227 10:29:06.832793 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:07.332772305 +0000 UTC m=+167.295138411 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.935364 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:06 crc kubenswrapper[4728]: E0227 10:29:06.935683 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:07.435668839 +0000 UTC m=+167.398034945 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:06 crc kubenswrapper[4728]: I0227 10:29:06.980864 4728 ???:1] "http: TLS handshake error from 192.168.126.11:37504: no serving certificate available for the kubelet" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.036418 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:07 crc kubenswrapper[4728]: E0227 10:29:07.036906 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:07.536885254 +0000 UTC m=+167.499251370 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.092057 4728 ???:1] "http: TLS handshake error from 192.168.126.11:37516: no serving certificate available for the kubelet" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.097076 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw" event={"ID":"2f537b0d-c3b9-4f04-b471-91c204f854a0","Type":"ContainerStarted","Data":"11d0fd87e478e085322cebd7feb6b2e9906a3f2fb685aafe651451fcea7b3d55"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.112455 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5nff" event={"ID":"1e079012-b36b-4046-b569-895b5100265d","Type":"ContainerStarted","Data":"bc29a420f5375402e6ba6605a2d90117446bdb48e0b66aa27ca7c7220416da6d"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.112811 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5nff" event={"ID":"1e079012-b36b-4046-b569-895b5100265d","Type":"ContainerStarted","Data":"713fd4dbd65b89df6c8fb713c9921068b1282dd716f499bef22642799ae508c1"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.136308 4728 patch_prober.go:28] interesting pod/router-default-5444994796-8n6md container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 10:29:07 crc 
kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 27 10:29:07 crc kubenswrapper[4728]: [+]process-running ok Feb 27 10:29:07 crc kubenswrapper[4728]: healthz check failed Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.136686 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qt4kh" event={"ID":"ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6","Type":"ContainerStarted","Data":"2c7febd3e2ada4f71c312e2d4b9bf2579966f8137065e3eeaa6d584f85c7f8ea"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.136699 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8n6md" podUID="90274913-fbff-4207-a3d8-f163ebcee220" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.137567 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:07 crc kubenswrapper[4728]: E0227 10:29:07.139051 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:07.639039976 +0000 UTC m=+167.601406082 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.165328 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" event={"ID":"285ad280-c5dc-4312-afcd-39678c1c5c0b","Type":"ContainerStarted","Data":"99472bcf65013fa79768723bd62b01306bef15c0f6af0525eb8829cd1923573c"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.165390 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" event={"ID":"285ad280-c5dc-4312-afcd-39678c1c5c0b","Type":"ContainerStarted","Data":"34e2feef390611a9c9d58e682da18a3f13d9dc44198d6d1b4c10d72d888c16ca"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.165479 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.168331 4728 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-cxzlx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.168396 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" podUID="285ad280-c5dc-4312-afcd-39678c1c5c0b" containerName="packageserver" probeResult="failure" 
output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.174009 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5nff" podStartSLOduration=97.173994966 podStartE2EDuration="1m37.173994966s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:07.168961943 +0000 UTC m=+167.131328049" watchObservedRunningTime="2026-02-27 10:29:07.173994966 +0000 UTC m=+167.136361072" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.174722 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kwpmw" podStartSLOduration=97.174718316 podStartE2EDuration="1m37.174718316s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:07.148180154 +0000 UTC m=+167.110546260" watchObservedRunningTime="2026-02-27 10:29:07.174718316 +0000 UTC m=+167.137084412" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.209312 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" podStartSLOduration=97.209295965 podStartE2EDuration="1m37.209295965s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:07.208632465 +0000 UTC m=+167.170998581" watchObservedRunningTime="2026-02-27 10:29:07.209295965 +0000 UTC m=+167.171662071" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.217911 4728 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9p4mb" event={"ID":"e87e4167-c76f-4adc-9d67-28485f6a6397","Type":"ContainerStarted","Data":"ab9180540dec52cff0c2d52c0ee7a572b852deec42b9f60be3f2b5d86ecad441"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.228252 4728 ???:1] "http: TLS handshake error from 192.168.126.11:37530: no serving certificate available for the kubelet" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.238467 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:07 crc kubenswrapper[4728]: E0227 10:29:07.239369 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:07.739353415 +0000 UTC m=+167.701719521 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.254299 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jzdfv" event={"ID":"5bbc683f-19d5-4c72-83a3-511c300446ad","Type":"ContainerStarted","Data":"3db8554f3a8ffc194111dfc6b5a12db76b79151cfad490716488650a5ff532e5"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.254357 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jzdfv" event={"ID":"5bbc683f-19d5-4c72-83a3-511c300446ad","Type":"ContainerStarted","Data":"5c5890ce65d2f06517c0dbdb97c1889158f9752242b3b57786d0c8a70725bb5c"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.256399 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-9p4mb" podStartSLOduration=97.256383238 podStartE2EDuration="1m37.256383238s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:07.242977918 +0000 UTC m=+167.205344024" watchObservedRunningTime="2026-02-27 10:29:07.256383238 +0000 UTC m=+167.218749344" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.295460 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jzdfv" podStartSLOduration=97.295443733 
podStartE2EDuration="1m37.295443733s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:07.293989782 +0000 UTC m=+167.256355898" watchObservedRunningTime="2026-02-27 10:29:07.295443733 +0000 UTC m=+167.257809839" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.312812 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bqx2" event={"ID":"f91e18b1-f9ed-4a0d-8aff-e7344791fb5e","Type":"ContainerStarted","Data":"9d04d0fa800306b31fec452cf4aa33ec2076a063291340d8c278b87a00477ca9"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.313852 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bqx2" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.339105 4728 ???:1] "http: TLS handshake error from 192.168.126.11:37542: no serving certificate available for the kubelet" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.340127 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:07 crc kubenswrapper[4728]: E0227 10:29:07.340367 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:07.840357465 +0000 UTC m=+167.802723571 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.353298 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" event={"ID":"438710a7-473e-43a3-8aee-6f1f2d5ac756","Type":"ContainerStarted","Data":"32d42c8d87e85626ed012613fdb85429cd29ed489457e98837214d20f6df2f0a"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.354236 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.366853 4728 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xv8vk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.366904 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" podUID="438710a7-473e-43a3-8aee-6f1f2d5ac756" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.367614 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zjx6k" 
event={"ID":"103ae8fe-45e0-4696-be10-bb2ced3ee561","Type":"ContainerStarted","Data":"c158e92da102b1b222e0f05bdc67917c6129737cc7dae36992a5cec295a3915c"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.371679 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bqx2" podStartSLOduration=97.371659201 podStartE2EDuration="1m37.371659201s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:07.339387528 +0000 UTC m=+167.301753634" watchObservedRunningTime="2026-02-27 10:29:07.371659201 +0000 UTC m=+167.334025307" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.372850 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" podStartSLOduration=97.372843545 podStartE2EDuration="1m37.372843545s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:07.370970361 +0000 UTC m=+167.333336467" watchObservedRunningTime="2026-02-27 10:29:07.372843545 +0000 UTC m=+167.335209651" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.375598 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" event={"ID":"6b237825-2c85-4de6-a839-b91e7d23d433","Type":"ContainerStarted","Data":"c82304ebbd2a923810baeeb2a84da528e48a2b11be33335a32fd40c5bee65bd2"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.387923 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt" 
event={"ID":"f82e7468-152d-46a6-9012-3bb0b4219b3f","Type":"ContainerStarted","Data":"aa1cb35b13eb3354dc69d33a288dd974cee0bf56a76e56257f834a3057988edb"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.401213 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6" event={"ID":"5eed07da-15ae-438b-a97f-568cd05ea1ee","Type":"ContainerStarted","Data":"2dc775279ff0c48ce321a8338be97f2ade64d704ff3c15082748f7c2f3fd4a6c"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.401256 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6" event={"ID":"5eed07da-15ae-438b-a97f-568cd05ea1ee","Type":"ContainerStarted","Data":"9be927e135b7d26bfaa88b9ac9fa7d3b0ac24fc839c2d76ec44f1817ae659ac1"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.402017 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.405223 4728 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-fj9d6 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.44:8443/healthz\": dial tcp 10.217.0.44:8443: connect: connection refused" start-of-body= Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.405259 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6" podUID="5eed07da-15ae-438b-a97f-568cd05ea1ee" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.44:8443/healthz\": dial tcp 10.217.0.44:8443: connect: connection refused" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.413836 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" 
podStartSLOduration=97.413821485 podStartE2EDuration="1m37.413821485s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:07.412895108 +0000 UTC m=+167.375261214" watchObservedRunningTime="2026-02-27 10:29:07.413821485 +0000 UTC m=+167.376187591" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.427709 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kl7tc" event={"ID":"e99c1615-8102-40e4-ba5a-fa770e09cf9c","Type":"ContainerStarted","Data":"a539b505670d495e20d09516b1255b36002b92baee1ac4e3b1e67a8215a94dc3"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.432011 4728 ???:1] "http: TLS handshake error from 192.168.126.11:37554: no serving certificate available for the kubelet" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.441145 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:07 crc kubenswrapper[4728]: E0227 10:29:07.442223 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:07.942207898 +0000 UTC m=+167.904574004 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.463862 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bgndl" event={"ID":"85142b10-6185-436b-a4eb-0469915055fe","Type":"ContainerStarted","Data":"5f0f0384ff489d73efb09056ed24a0d20d309e5281a307dcdbae860f903882b3"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.463902 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bgndl" event={"ID":"85142b10-6185-436b-a4eb-0469915055fe","Type":"ContainerStarted","Data":"68eb28931b9fa2b4ae340af16fa476e03127b40dc465a45211c06f63118fac6a"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.463912 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bgndl" event={"ID":"85142b10-6185-436b-a4eb-0469915055fe","Type":"ContainerStarted","Data":"f4f9235d561e502c540fa1e2515babd9c8368df7867c1a012804dac0f1d9ec11"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.470098 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7" event={"ID":"8adeb294-3f47-4d17-bf64-c0d6328b6f2d","Type":"ContainerStarted","Data":"aa4a0245b82a7977ae006e71f93972a545d73d82a1cd3a53336d4ac6855a2673"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.479075 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca/service-ca-9c57cc56f-7n4w8" event={"ID":"15e7f85b-5175-40d2-ade3-66aa7469d9cd","Type":"ContainerStarted","Data":"a7b01f82f3fc0b070a403e57403f169e705a773f8d7f2965c418b68be3f7c94b"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.490310 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g65x" event={"ID":"24dd67e0-7c58-4f3f-a712-ae2639e495fe","Type":"ContainerStarted","Data":"3327c60cbd34a036e72fbebdffb5fa17b3f7d34844162bfabe5cc0d920099b58"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.490891 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g65x" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.497841 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt" podStartSLOduration=98.497827663 podStartE2EDuration="1m38.497827663s" podCreationTimestamp="2026-02-27 10:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:07.462924405 +0000 UTC m=+167.425290511" watchObservedRunningTime="2026-02-27 10:29:07.497827663 +0000 UTC m=+167.460193769" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.500138 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5lw67" event={"ID":"7319e158-317b-4f98-b9da-0481f2c0aca8","Type":"ContainerStarted","Data":"3eeddf1b5f2daba044db4e2affebc00f4620e41fdda676612ad6ef34e6c8bcff"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.508002 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-c46ql" 
event={"ID":"a3656135-373e-4ec6-9cf1-e34d6a95c5a5","Type":"ContainerStarted","Data":"6714f5307e1d47935a84ddded170716263149299f12034e07a235f40f49e1e43"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.508743 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-c46ql" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.510073 4728 patch_prober.go:28] interesting pod/downloads-7954f5f757-c46ql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.510106 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-c46ql" podUID="a3656135-373e-4ec6-9cf1-e34d6a95c5a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.522282 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" event={"ID":"de6bdf95-032d-42a2-a8b5-0202641a05c1","Type":"ContainerStarted","Data":"663fac5063b4ed346937f0493e50ea2f79981be85156caa72e27186d7e102b4b"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.523107 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.533267 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2f6px" event={"ID":"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21","Type":"ContainerStarted","Data":"7e74c4d88385034b0cd34d6e04108bcc5334109d3fff3f60470d31b643273ea1"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.535399 4728 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5gztc" event={"ID":"e774d0ad-6c46-42cf-9605-535444f24c79","Type":"ContainerStarted","Data":"7bf05b96e36744d4d4526b2f627e0be90da1d7ed8cae9d45d55564548ca868ff"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.535445 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5gztc" event={"ID":"e774d0ad-6c46-42cf-9605-535444f24c79","Type":"ContainerStarted","Data":"f3ae7782017b98d5c9298c0250ac4793df91e91ed7b4b23d12cc245f9e2990f2"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.542429 4728 ???:1] "http: TLS handshake error from 192.168.126.11:37564: no serving certificate available for the kubelet" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.543173 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:07 crc kubenswrapper[4728]: E0227 10:29:07.547104 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:08.047073027 +0000 UTC m=+168.009439253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.547151 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h55dj" event={"ID":"20c5c038-19ee-4ac6-b2dc-281920c6be9a","Type":"ContainerStarted","Data":"6dc7b7d580b98797e10917c7bfa7ee8de52f914048bf4b79e6254bd4eb0cca3c"} Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.557819 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6" podStartSLOduration=97.557799481 podStartE2EDuration="1m37.557799481s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:07.498220474 +0000 UTC m=+167.460586580" watchObservedRunningTime="2026-02-27 10:29:07.557799481 +0000 UTC m=+167.520165587" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.558858 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g65x" podStartSLOduration=98.558853301 podStartE2EDuration="1m38.558853301s" podCreationTimestamp="2026-02-27 10:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:07.537259109 +0000 UTC m=+167.499625205" watchObservedRunningTime="2026-02-27 10:29:07.558853301 +0000 UTC 
m=+167.521219407" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.562809 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.568764 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-w976v"] Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.574916 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bgndl" podStartSLOduration=97.574902455 podStartE2EDuration="1m37.574902455s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:07.573884236 +0000 UTC m=+167.536250342" watchObservedRunningTime="2026-02-27 10:29:07.574902455 +0000 UTC m=+167.537268561" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.577119 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghzzf" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.635047 4728 ???:1] "http: TLS handshake error from 192.168.126.11:37578: no serving certificate available for the kubelet" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.647111 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:07 crc kubenswrapper[4728]: E0227 10:29:07.648280 4728 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:08.148266301 +0000 UTC m=+168.110632407 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.701596 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4psh7" podStartSLOduration=97.701581392 podStartE2EDuration="1m37.701581392s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:07.614063443 +0000 UTC m=+167.576429549" watchObservedRunningTime="2026-02-27 10:29:07.701581392 +0000 UTC m=+167.663947498" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.702226 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-2f6px" podStartSLOduration=98.702222439 podStartE2EDuration="1m38.702222439s" podCreationTimestamp="2026-02-27 10:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:07.700566532 +0000 UTC m=+167.662932638" watchObservedRunningTime="2026-02-27 10:29:07.702222439 +0000 UTC m=+167.664588545" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.748770 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:07 crc kubenswrapper[4728]: E0227 10:29:07.749041 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:08.249028554 +0000 UTC m=+168.211394660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.780113 4728 ???:1] "http: TLS handshake error from 192.168.126.11:37580: no serving certificate available for the kubelet" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.799880 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5lw67" podStartSLOduration=98.799863204 podStartE2EDuration="1m38.799863204s" podCreationTimestamp="2026-02-27 10:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:07.729762639 +0000 UTC m=+167.692128745" watchObservedRunningTime="2026-02-27 10:29:07.799863204 +0000 UTC m=+167.762229310" Feb 27 
10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.800746 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" podStartSLOduration=98.800741308 podStartE2EDuration="1m38.800741308s" podCreationTimestamp="2026-02-27 10:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:07.799441391 +0000 UTC m=+167.761807497" watchObservedRunningTime="2026-02-27 10:29:07.800741308 +0000 UTC m=+167.763107414" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.830257 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h55dj" podStartSLOduration=97.830240543 podStartE2EDuration="1m37.830240543s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:07.827328081 +0000 UTC m=+167.789694187" watchObservedRunningTime="2026-02-27 10:29:07.830240543 +0000 UTC m=+167.792606649" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.849434 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:07 crc kubenswrapper[4728]: E0227 10:29:07.849753 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:08.349737935 +0000 UTC m=+168.312104041 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.962578 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:07 crc kubenswrapper[4728]: E0227 10:29:07.962924 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:08.462912289 +0000 UTC m=+168.425278395 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.963774 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5gztc" podStartSLOduration=97.96375737299999 podStartE2EDuration="1m37.963757373s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:07.961921701 +0000 UTC m=+167.924287807" watchObservedRunningTime="2026-02-27 10:29:07.963757373 +0000 UTC m=+167.926123479" Feb 27 10:29:07 crc kubenswrapper[4728]: I0227 10:29:07.995366 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-g4dbw" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.002950 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wktw4" podStartSLOduration=8.002935662 podStartE2EDuration="8.002935662s" podCreationTimestamp="2026-02-27 10:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:08.001964445 +0000 UTC m=+167.964330551" watchObservedRunningTime="2026-02-27 10:29:08.002935662 +0000 UTC m=+167.965301768" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.063987 4728 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:08 crc kubenswrapper[4728]: E0227 10:29:08.064277 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:08.564261499 +0000 UTC m=+168.526627605 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.081106 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-c46ql" podStartSLOduration=98.081091725 podStartE2EDuration="1m38.081091725s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:08.07985421 +0000 UTC m=+168.042220316" watchObservedRunningTime="2026-02-27 10:29:08.081091725 +0000 UTC m=+168.043457831" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.142675 4728 patch_prober.go:28] interesting pod/router-default-5444994796-8n6md container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 27 10:29:08 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 27 10:29:08 crc kubenswrapper[4728]: [+]process-running ok Feb 27 10:29:08 crc kubenswrapper[4728]: healthz check failed Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.142725 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8n6md" podUID="90274913-fbff-4207-a3d8-f163ebcee220" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.165821 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:08 crc kubenswrapper[4728]: E0227 10:29:08.166104 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:08.666091871 +0000 UTC m=+168.628457987 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.236060 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.236263 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.259965 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.260013 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.261001 4728 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-j72xn container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.261040 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" podUID="6b237825-2c85-4de6-a839-b91e7d23d433" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 
10:29:08.267003 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:08 crc kubenswrapper[4728]: E0227 10:29:08.267148 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:08.767121241 +0000 UTC m=+168.729487357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.267183 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:08 crc kubenswrapper[4728]: E0227 10:29:08.267495 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-27 10:29:08.767486732 +0000 UTC m=+168.729852838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.275792 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b97gn"] Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.276755 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b97gn" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.291578 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.305663 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b97gn"] Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.367740 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:08 crc kubenswrapper[4728]: E0227 10:29:08.367862 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 10:29:08.867836992 +0000 UTC m=+168.830203098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.368035 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.368088 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34088c2f-1e95-4227-9242-9e4cde7a9fde-catalog-content\") pod \"community-operators-b97gn\" (UID: \"34088c2f-1e95-4227-9242-9e4cde7a9fde\") " pod="openshift-marketplace/community-operators-b97gn" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.368106 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npgnf\" (UniqueName: \"kubernetes.io/projected/34088c2f-1e95-4227-9242-9e4cde7a9fde-kube-api-access-npgnf\") pod \"community-operators-b97gn\" (UID: \"34088c2f-1e95-4227-9242-9e4cde7a9fde\") " pod="openshift-marketplace/community-operators-b97gn" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.368138 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34088c2f-1e95-4227-9242-9e4cde7a9fde-utilities\") pod \"community-operators-b97gn\" (UID: \"34088c2f-1e95-4227-9242-9e4cde7a9fde\") " pod="openshift-marketplace/community-operators-b97gn" Feb 27 10:29:08 crc kubenswrapper[4728]: E0227 10:29:08.368405 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:08.868388118 +0000 UTC m=+168.830754214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.465571 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t4bnk"] Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.466826 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t4bnk" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.469537 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.470268 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:08 crc kubenswrapper[4728]: E0227 10:29:08.470444 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:08.970422037 +0000 UTC m=+168.932788133 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.470482 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34088c2f-1e95-4227-9242-9e4cde7a9fde-utilities\") pod \"community-operators-b97gn\" (UID: \"34088c2f-1e95-4227-9242-9e4cde7a9fde\") " pod="openshift-marketplace/community-operators-b97gn" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.470536 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7771abc7-886d-41eb-b966-74538062511f-utilities\") pod \"certified-operators-t4bnk\" (UID: \"7771abc7-886d-41eb-b966-74538062511f\") " pod="openshift-marketplace/certified-operators-t4bnk" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.470724 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.470788 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34088c2f-1e95-4227-9242-9e4cde7a9fde-catalog-content\") pod \"community-operators-b97gn\" (UID: 
\"34088c2f-1e95-4227-9242-9e4cde7a9fde\") " pod="openshift-marketplace/community-operators-b97gn" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.470815 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npgnf\" (UniqueName: \"kubernetes.io/projected/34088c2f-1e95-4227-9242-9e4cde7a9fde-kube-api-access-npgnf\") pod \"community-operators-b97gn\" (UID: \"34088c2f-1e95-4227-9242-9e4cde7a9fde\") " pod="openshift-marketplace/community-operators-b97gn" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.470853 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7771abc7-886d-41eb-b966-74538062511f-catalog-content\") pod \"certified-operators-t4bnk\" (UID: \"7771abc7-886d-41eb-b966-74538062511f\") " pod="openshift-marketplace/certified-operators-t4bnk" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.470877 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34088c2f-1e95-4227-9242-9e4cde7a9fde-utilities\") pod \"community-operators-b97gn\" (UID: \"34088c2f-1e95-4227-9242-9e4cde7a9fde\") " pod="openshift-marketplace/community-operators-b97gn" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.470893 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmk8m\" (UniqueName: \"kubernetes.io/projected/7771abc7-886d-41eb-b966-74538062511f-kube-api-access-dmk8m\") pod \"certified-operators-t4bnk\" (UID: \"7771abc7-886d-41eb-b966-74538062511f\") " pod="openshift-marketplace/certified-operators-t4bnk" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.471148 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34088c2f-1e95-4227-9242-9e4cde7a9fde-catalog-content\") pod 
\"community-operators-b97gn\" (UID: \"34088c2f-1e95-4227-9242-9e4cde7a9fde\") " pod="openshift-marketplace/community-operators-b97gn" Feb 27 10:29:08 crc kubenswrapper[4728]: E0227 10:29:08.471429 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:08.971414324 +0000 UTC m=+168.933780430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.479025 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t4bnk"] Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.494192 4728 ???:1] "http: TLS handshake error from 192.168.126.11:37596: no serving certificate available for the kubelet" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.502268 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npgnf\" (UniqueName: \"kubernetes.io/projected/34088c2f-1e95-4227-9242-9e4cde7a9fde-kube-api-access-npgnf\") pod \"community-operators-b97gn\" (UID: \"34088c2f-1e95-4227-9242-9e4cde7a9fde\") " pod="openshift-marketplace/community-operators-b97gn" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.524605 4728 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-25vw6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.524685 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" podUID="de6bdf95-032d-42a2-a8b5-0202641a05c1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.572071 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:08 crc kubenswrapper[4728]: E0227 10:29:08.572190 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:09.072163817 +0000 UTC m=+169.034529923 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.572431 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.572475 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7771abc7-886d-41eb-b966-74538062511f-catalog-content\") pod \"certified-operators-t4bnk\" (UID: \"7771abc7-886d-41eb-b966-74538062511f\") " pod="openshift-marketplace/certified-operators-t4bnk" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.572513 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmk8m\" (UniqueName: \"kubernetes.io/projected/7771abc7-886d-41eb-b966-74538062511f-kube-api-access-dmk8m\") pod \"certified-operators-t4bnk\" (UID: \"7771abc7-886d-41eb-b966-74538062511f\") " pod="openshift-marketplace/certified-operators-t4bnk" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.572539 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7771abc7-886d-41eb-b966-74538062511f-utilities\") pod \"certified-operators-t4bnk\" (UID: 
\"7771abc7-886d-41eb-b966-74538062511f\") " pod="openshift-marketplace/certified-operators-t4bnk" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.572966 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7771abc7-886d-41eb-b966-74538062511f-utilities\") pod \"certified-operators-t4bnk\" (UID: \"7771abc7-886d-41eb-b966-74538062511f\") " pod="openshift-marketplace/certified-operators-t4bnk" Feb 27 10:29:08 crc kubenswrapper[4728]: E0227 10:29:08.573196 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:09.073184556 +0000 UTC m=+169.035550662 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.573424 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7771abc7-886d-41eb-b966-74538062511f-catalog-content\") pod \"certified-operators-t4bnk\" (UID: \"7771abc7-886d-41eb-b966-74538062511f\") " pod="openshift-marketplace/certified-operators-t4bnk" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.585281 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qt4kh" event={"ID":"ba4b5d33-bc92-49db-bb38-80a7bd3ca2f6","Type":"ContainerStarted","Data":"ed0d9f5fce5fe7f2d846a3eed7d9b8468617f25fe94848ed09ba85df641df347"} Feb 27 
10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.586224 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qt4kh" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.591980 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b97gn" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.616593 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2f6px" event={"ID":"f5b2eb8e-7b36-40ac-b745-9e1a3efaec21","Type":"ContainerStarted","Data":"24a06d5f81d76e59d6edde138913c78682bef5ea2744694db81cc4dbc56bdae5"} Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.618019 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-qt4kh" podStartSLOduration=8.618001544 podStartE2EDuration="8.618001544s" podCreationTimestamp="2026-02-27 10:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:08.614951148 +0000 UTC m=+168.577317264" watchObservedRunningTime="2026-02-27 10:29:08.618001544 +0000 UTC m=+168.580367650" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.618116 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmk8m\" (UniqueName: \"kubernetes.io/projected/7771abc7-886d-41eb-b966-74538062511f-kube-api-access-dmk8m\") pod \"certified-operators-t4bnk\" (UID: \"7771abc7-886d-41eb-b966-74538062511f\") " pod="openshift-marketplace/certified-operators-t4bnk" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.621984 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bqx2" event={"ID":"f91e18b1-f9ed-4a0d-8aff-e7344791fb5e","Type":"ContainerStarted","Data":"daab4809be820a0a87971c1459cde314d9bc6731cb0fa83bd2ffc42b1e62eae0"} Feb 
27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.645895 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zjx6k" event={"ID":"103ae8fe-45e0-4696-be10-bb2ced3ee561","Type":"ContainerStarted","Data":"020788df1ebcab345d7a8e3c26b4edec2f483db169754ef49e82f4f45bd2342b"} Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.650558 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sftjq" event={"ID":"6e5d2a5d-0128-4d37-b653-555cc40a8d39","Type":"ContainerStarted","Data":"63722cbb9398736edaa03af5d087f838251ef0e5278934f834009aa9f3654ae2"} Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.654883 4728 patch_prober.go:28] interesting pod/downloads-7954f5f757-c46ql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.654933 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-c46ql" podUID="a3656135-373e-4ec6-9cf1-e34d6a95c5a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.655193 4728 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xv8vk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.655237 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" podUID="438710a7-473e-43a3-8aee-6f1f2d5ac756" containerName="marketplace-operator" 
probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.668906 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-fj9d6" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.669549 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-brtfb"] Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.670406 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brtfb" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.673816 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:08 crc kubenswrapper[4728]: E0227 10:29:08.674146 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:09.174129083 +0000 UTC m=+169.136495189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.718981 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-zjx6k" podStartSLOduration=98.718967393 podStartE2EDuration="1m38.718967393s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:08.713710084 +0000 UTC m=+168.676076190" watchObservedRunningTime="2026-02-27 10:29:08.718967393 +0000 UTC m=+168.681333499" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.719123 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brtfb"] Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.742789 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.775180 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.775370 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvlqh\" (UniqueName: \"kubernetes.io/projected/7c5a3750-282d-4f84-a9c9-b3167aa283b8-kube-api-access-dvlqh\") pod \"community-operators-brtfb\" (UID: \"7c5a3750-282d-4f84-a9c9-b3167aa283b8\") " pod="openshift-marketplace/community-operators-brtfb" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.776042 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c5a3750-282d-4f84-a9c9-b3167aa283b8-utilities\") pod \"community-operators-brtfb\" (UID: \"7c5a3750-282d-4f84-a9c9-b3167aa283b8\") " pod="openshift-marketplace/community-operators-brtfb" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.776065 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c5a3750-282d-4f84-a9c9-b3167aa283b8-catalog-content\") pod \"community-operators-brtfb\" (UID: \"7c5a3750-282d-4f84-a9c9-b3167aa283b8\") " pod="openshift-marketplace/community-operators-brtfb" Feb 27 10:29:08 crc kubenswrapper[4728]: E0227 10:29:08.778708 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:09.278696154 +0000 UTC m=+169.241062260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.782794 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t4bnk" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.877439 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gg7mm"] Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.878442 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gg7mm" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.882284 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.882531 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvlqh\" (UniqueName: \"kubernetes.io/projected/7c5a3750-282d-4f84-a9c9-b3167aa283b8-kube-api-access-dvlqh\") pod \"community-operators-brtfb\" (UID: \"7c5a3750-282d-4f84-a9c9-b3167aa283b8\") " pod="openshift-marketplace/community-operators-brtfb" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.882595 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7c5a3750-282d-4f84-a9c9-b3167aa283b8-utilities\") pod \"community-operators-brtfb\" (UID: \"7c5a3750-282d-4f84-a9c9-b3167aa283b8\") " pod="openshift-marketplace/community-operators-brtfb" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.882619 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c5a3750-282d-4f84-a9c9-b3167aa283b8-catalog-content\") pod \"community-operators-brtfb\" (UID: \"7c5a3750-282d-4f84-a9c9-b3167aa283b8\") " pod="openshift-marketplace/community-operators-brtfb" Feb 27 10:29:08 crc kubenswrapper[4728]: E0227 10:29:08.883017 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:09.382993526 +0000 UTC m=+169.345359642 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.883363 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c5a3750-282d-4f84-a9c9-b3167aa283b8-catalog-content\") pod \"community-operators-brtfb\" (UID: \"7c5a3750-282d-4f84-a9c9-b3167aa283b8\") " pod="openshift-marketplace/community-operators-brtfb" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.883851 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c5a3750-282d-4f84-a9c9-b3167aa283b8-utilities\") pod \"community-operators-brtfb\" (UID: \"7c5a3750-282d-4f84-a9c9-b3167aa283b8\") " pod="openshift-marketplace/community-operators-brtfb" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.936232 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gg7mm"] Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.943932 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvlqh\" (UniqueName: \"kubernetes.io/projected/7c5a3750-282d-4f84-a9c9-b3167aa283b8-kube-api-access-dvlqh\") pod \"community-operators-brtfb\" (UID: \"7c5a3750-282d-4f84-a9c9-b3167aa283b8\") " pod="openshift-marketplace/community-operators-brtfb" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.989202 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0a7d9e95-6291-465f-9f94-f99fc86e4389-catalog-content\") pod \"certified-operators-gg7mm\" (UID: \"0a7d9e95-6291-465f-9f94-f99fc86e4389\") " pod="openshift-marketplace/certified-operators-gg7mm" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.989484 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a7d9e95-6291-465f-9f94-f99fc86e4389-utilities\") pod \"certified-operators-gg7mm\" (UID: \"0a7d9e95-6291-465f-9f94-f99fc86e4389\") " pod="openshift-marketplace/certified-operators-gg7mm" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.989536 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:08 crc kubenswrapper[4728]: I0227 10:29:08.989556 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml2lk\" (UniqueName: \"kubernetes.io/projected/0a7d9e95-6291-465f-9f94-f99fc86e4389-kube-api-access-ml2lk\") pod \"certified-operators-gg7mm\" (UID: \"0a7d9e95-6291-465f-9f94-f99fc86e4389\") " pod="openshift-marketplace/certified-operators-gg7mm" Feb 27 10:29:08 crc kubenswrapper[4728]: E0227 10:29:08.989907 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:09.489895473 +0000 UTC m=+169.452261569 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.004287 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brtfb" Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.094033 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.094323 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a7d9e95-6291-465f-9f94-f99fc86e4389-catalog-content\") pod \"certified-operators-gg7mm\" (UID: \"0a7d9e95-6291-465f-9f94-f99fc86e4389\") " pod="openshift-marketplace/certified-operators-gg7mm" Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.094363 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a7d9e95-6291-465f-9f94-f99fc86e4389-utilities\") pod \"certified-operators-gg7mm\" (UID: \"0a7d9e95-6291-465f-9f94-f99fc86e4389\") " pod="openshift-marketplace/certified-operators-gg7mm" Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.094388 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml2lk\" 
(UniqueName: \"kubernetes.io/projected/0a7d9e95-6291-465f-9f94-f99fc86e4389-kube-api-access-ml2lk\") pod \"certified-operators-gg7mm\" (UID: \"0a7d9e95-6291-465f-9f94-f99fc86e4389\") " pod="openshift-marketplace/certified-operators-gg7mm" Feb 27 10:29:09 crc kubenswrapper[4728]: E0227 10:29:09.094607 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:09.594591936 +0000 UTC m=+169.556958042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.094925 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a7d9e95-6291-465f-9f94-f99fc86e4389-catalog-content\") pod \"certified-operators-gg7mm\" (UID: \"0a7d9e95-6291-465f-9f94-f99fc86e4389\") " pod="openshift-marketplace/certified-operators-gg7mm" Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.095127 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a7d9e95-6291-465f-9f94-f99fc86e4389-utilities\") pod \"certified-operators-gg7mm\" (UID: \"0a7d9e95-6291-465f-9f94-f99fc86e4389\") " pod="openshift-marketplace/certified-operators-gg7mm" Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.128448 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-d2nkm"] Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.155559 4728 patch_prober.go:28] interesting pod/router-default-5444994796-8n6md container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 10:29:09 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 27 10:29:09 crc kubenswrapper[4728]: [+]process-running ok Feb 27 10:29:09 crc kubenswrapper[4728]: healthz check failed Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.155597 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8n6md" podUID="90274913-fbff-4207-a3d8-f163ebcee220" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.156441 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml2lk\" (UniqueName: \"kubernetes.io/projected/0a7d9e95-6291-465f-9f94-f99fc86e4389-kube-api-access-ml2lk\") pod \"certified-operators-gg7mm\" (UID: \"0a7d9e95-6291-465f-9f94-f99fc86e4389\") " pod="openshift-marketplace/certified-operators-gg7mm" Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.196843 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:09 crc kubenswrapper[4728]: E0227 10:29:09.197128 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-02-27 10:29:09.697115419 +0000 UTC m=+169.659481525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.241743 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gg7mm" Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.302078 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:09 crc kubenswrapper[4728]: E0227 10:29:09.302373 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:09.802356848 +0000 UTC m=+169.764722954 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.399064 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj"] Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.403731 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:09 crc kubenswrapper[4728]: E0227 10:29:09.404060 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:09.904046036 +0000 UTC m=+169.866412142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:09 crc kubenswrapper[4728]: W0227 10:29:09.466214 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34088c2f_1e95_4227_9242_9e4cde7a9fde.slice/crio-7da586308b762260a4e430b04ac5f27ed14d9e51d52cb5234c07ab19d03e39a3 WatchSource:0}: Error finding container 7da586308b762260a4e430b04ac5f27ed14d9e51d52cb5234c07ab19d03e39a3: Status 404 returned error can't find the container with id 7da586308b762260a4e430b04ac5f27ed14d9e51d52cb5234c07ab19d03e39a3 Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.484427 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b97gn"] Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.493875 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t4bnk"] Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.509296 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:09 crc kubenswrapper[4728]: E0227 10:29:09.509609 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:10.009593125 +0000 UTC m=+169.971959231 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.613781 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:09 crc kubenswrapper[4728]: E0227 10:29:09.614048 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:10.114037071 +0000 UTC m=+170.076403177 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.659863 4728 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-cxzlx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.660153 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" podUID="285ad280-c5dc-4312-afcd-39678c1c5c0b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.706735 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b97gn" event={"ID":"34088c2f-1e95-4227-9242-9e4cde7a9fde","Type":"ContainerStarted","Data":"7da586308b762260a4e430b04ac5f27ed14d9e51d52cb5234c07ab19d03e39a3"} Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.715030 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:09 crc kubenswrapper[4728]: E0227 10:29:09.715470 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:10.215456323 +0000 UTC m=+170.177822429 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.765860 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sftjq" event={"ID":"6e5d2a5d-0128-4d37-b653-555cc40a8d39","Type":"ContainerStarted","Data":"dc85336d3753a59617f16cb60dc099fa1b79fe8d9b434e963180a42f4ee6bc88"} Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.782048 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4bnk" event={"ID":"7771abc7-886d-41eb-b966-74538062511f","Type":"ContainerStarted","Data":"c4cbddc869f0ebf2e50759993c9b7fff94903bd1dd9626a0aa995561acde2004"} Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.784471 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" podUID="a4d6106d-dd3c-4a6a-aaeb-b441add3fdad" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf" gracePeriod=30 Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.792250 4728 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" podUID="52d0f01e-a7ab-4c07-bd46-d014e84c3d6a" containerName="controller-manager" containerID="cri-o://3543c7d69eef4755964ea7d1f54094ce89e5f119789732465e5e76e5a2a32963" gracePeriod=30 Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.792732 4728 patch_prober.go:28] interesting pod/downloads-7954f5f757-c46ql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.792774 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-c46ql" podUID="a3656135-373e-4ec6-9cf1-e34d6a95c5a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.816872 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:09 crc kubenswrapper[4728]: E0227 10:29:09.817207 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:10.317196843 +0000 UTC m=+170.279562949 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.834068 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.858754 4728 patch_prober.go:28] interesting pod/apiserver-76f77b778f-2f6px container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 27 10:29:09 crc kubenswrapper[4728]: [+]log ok Feb 27 10:29:09 crc kubenswrapper[4728]: [+]etcd ok Feb 27 10:29:09 crc kubenswrapper[4728]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 27 10:29:09 crc kubenswrapper[4728]: [+]poststarthook/generic-apiserver-start-informers ok Feb 27 10:29:09 crc kubenswrapper[4728]: [+]poststarthook/max-in-flight-filter ok Feb 27 10:29:09 crc kubenswrapper[4728]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 27 10:29:09 crc kubenswrapper[4728]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 27 10:29:09 crc kubenswrapper[4728]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 27 10:29:09 crc kubenswrapper[4728]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 27 10:29:09 crc kubenswrapper[4728]: [+]poststarthook/project.openshift.io-projectcache ok Feb 27 10:29:09 crc kubenswrapper[4728]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 27 
10:29:09 crc kubenswrapper[4728]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Feb 27 10:29:09 crc kubenswrapper[4728]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 27 10:29:09 crc kubenswrapper[4728]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 27 10:29:09 crc kubenswrapper[4728]: livez check failed Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.858832 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-2f6px" podUID="f5b2eb8e-7b36-40ac-b745-9e1a3efaec21" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.866223 4728 ???:1] "http: TLS handshake error from 192.168.126.11:37610: no serving certificate available for the kubelet" Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.903937 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gg7mm"] Feb 27 10:29:09 crc kubenswrapper[4728]: I0227 10:29:09.917677 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:09 crc kubenswrapper[4728]: E0227 10:29:09.919154 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:10.419133198 +0000 UTC m=+170.381499304 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:09 crc kubenswrapper[4728]: W0227 10:29:09.974611 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a7d9e95_6291_465f_9f94_f99fc86e4389.slice/crio-d608e445ce5ff869da41ef045aaa499ed97caf7477db66b53a7c4659d07b97e4 WatchSource:0}: Error finding container d608e445ce5ff869da41ef045aaa499ed97caf7477db66b53a7c4659d07b97e4: Status 404 returned error can't find the container with id d608e445ce5ff869da41ef045aaa499ed97caf7477db66b53a7c4659d07b97e4 Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.014155 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7g65x" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.019441 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:10 crc kubenswrapper[4728]: E0227 10:29:10.019887 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-27 10:29:10.51986546 +0000 UTC m=+170.482231566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.081624 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brtfb"] Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.120081 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:10 crc kubenswrapper[4728]: E0227 10:29:10.120361 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:10.620346275 +0000 UTC m=+170.582712381 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.143704 4728 patch_prober.go:28] interesting pod/router-default-5444994796-8n6md container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 10:29:10 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 27 10:29:10 crc kubenswrapper[4728]: [+]process-running ok Feb 27 10:29:10 crc kubenswrapper[4728]: healthz check failed Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.143754 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8n6md" podUID="90274913-fbff-4207-a3d8-f163ebcee220" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.160065 4728 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.222068 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:10 crc 
kubenswrapper[4728]: E0227 10:29:10.222365 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:10.722352733 +0000 UTC m=+170.684718839 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.323195 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:10 crc kubenswrapper[4728]: E0227 10:29:10.323368 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:10.823342272 +0000 UTC m=+170.785708378 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.323406 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:10 crc kubenswrapper[4728]: E0227 10:29:10.323958 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:10.823948308 +0000 UTC m=+170.786314414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.424868 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:10 crc kubenswrapper[4728]: E0227 10:29:10.425178 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:10.925163734 +0000 UTC m=+170.887529840 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.457709 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wnfnp"] Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.460209 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnfnp" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.462739 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnfnp"] Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.463901 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.540813 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27f5cf8-de13-42a0-825a-0bc27ddc8466-utilities\") pod \"redhat-marketplace-wnfnp\" (UID: \"b27f5cf8-de13-42a0-825a-0bc27ddc8466\") " pod="openshift-marketplace/redhat-marketplace-wnfnp" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.540898 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6qpz\" (UniqueName: \"kubernetes.io/projected/b27f5cf8-de13-42a0-825a-0bc27ddc8466-kube-api-access-x6qpz\") pod \"redhat-marketplace-wnfnp\" (UID: \"b27f5cf8-de13-42a0-825a-0bc27ddc8466\") " pod="openshift-marketplace/redhat-marketplace-wnfnp" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.540925 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27f5cf8-de13-42a0-825a-0bc27ddc8466-catalog-content\") pod \"redhat-marketplace-wnfnp\" (UID: \"b27f5cf8-de13-42a0-825a-0bc27ddc8466\") " pod="openshift-marketplace/redhat-marketplace-wnfnp" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.541033 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:10 crc kubenswrapper[4728]: E0227 10:29:10.541312 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:11.041300292 +0000 UTC m=+171.003666398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.648577 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:10 crc kubenswrapper[4728]: E0227 10:29:10.648807 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:11.148769914 +0000 UTC m=+171.111136030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.649092 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.649131 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27f5cf8-de13-42a0-825a-0bc27ddc8466-utilities\") pod \"redhat-marketplace-wnfnp\" (UID: \"b27f5cf8-de13-42a0-825a-0bc27ddc8466\") " pod="openshift-marketplace/redhat-marketplace-wnfnp" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.649168 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6qpz\" (UniqueName: \"kubernetes.io/projected/b27f5cf8-de13-42a0-825a-0bc27ddc8466-kube-api-access-x6qpz\") pod \"redhat-marketplace-wnfnp\" (UID: \"b27f5cf8-de13-42a0-825a-0bc27ddc8466\") " pod="openshift-marketplace/redhat-marketplace-wnfnp" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.649186 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27f5cf8-de13-42a0-825a-0bc27ddc8466-catalog-content\") pod \"redhat-marketplace-wnfnp\" (UID: 
\"b27f5cf8-de13-42a0-825a-0bc27ddc8466\") " pod="openshift-marketplace/redhat-marketplace-wnfnp" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.649672 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27f5cf8-de13-42a0-825a-0bc27ddc8466-catalog-content\") pod \"redhat-marketplace-wnfnp\" (UID: \"b27f5cf8-de13-42a0-825a-0bc27ddc8466\") " pod="openshift-marketplace/redhat-marketplace-wnfnp" Feb 27 10:29:10 crc kubenswrapper[4728]: E0227 10:29:10.649925 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:11.149912647 +0000 UTC m=+171.112278833 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.650166 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27f5cf8-de13-42a0-825a-0bc27ddc8466-utilities\") pod \"redhat-marketplace-wnfnp\" (UID: \"b27f5cf8-de13-42a0-825a-0bc27ddc8466\") " pod="openshift-marketplace/redhat-marketplace-wnfnp" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.685869 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6qpz\" (UniqueName: \"kubernetes.io/projected/b27f5cf8-de13-42a0-825a-0bc27ddc8466-kube-api-access-x6qpz\") pod \"redhat-marketplace-wnfnp\" (UID: 
\"b27f5cf8-de13-42a0-825a-0bc27ddc8466\") " pod="openshift-marketplace/redhat-marketplace-wnfnp" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.749878 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:10 crc kubenswrapper[4728]: E0227 10:29:10.750214 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:11.250198846 +0000 UTC m=+171.212564952 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.753099 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.776237 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnfnp" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.814084 4728 generic.go:334] "Generic (PLEG): container finished" podID="7c5a3750-282d-4f84-a9c9-b3167aa283b8" containerID="402e7c6d0bb1dc6970509d12a2436cdc1b369627cea9a8f6a8d3f64c8971e5ae" exitCode=0 Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.814148 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brtfb" event={"ID":"7c5a3750-282d-4f84-a9c9-b3167aa283b8","Type":"ContainerDied","Data":"402e7c6d0bb1dc6970509d12a2436cdc1b369627cea9a8f6a8d3f64c8971e5ae"} Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.814176 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brtfb" event={"ID":"7c5a3750-282d-4f84-a9c9-b3167aa283b8","Type":"ContainerStarted","Data":"37c99086912d0b21379fdbc7215dde0cf9bb8add722cf47e61073c9943c2a0ea"} Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.840845 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sftjq" event={"ID":"6e5d2a5d-0128-4d37-b653-555cc40a8d39","Type":"ContainerStarted","Data":"d82f7a43e8565f531374a7ae477844a0996289ecf115f9a66877f4c2413e33f4"} Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.851376 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-serving-cert\") pod \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.851416 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-client-ca\") pod \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " Feb 27 
10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.851568 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fs96\" (UniqueName: \"kubernetes.io/projected/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-kube-api-access-8fs96\") pod \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.851608 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-config\") pod \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.851639 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-proxy-ca-bundles\") pod \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\" (UID: \"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a\") " Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.851764 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:10 crc kubenswrapper[4728]: E0227 10:29:10.852064 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 10:29:11.35205298 +0000 UTC m=+171.314419086 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4vnn" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.853578 4728 generic.go:334] "Generic (PLEG): container finished" podID="7771abc7-886d-41eb-b966-74538062511f" containerID="99e0529609a00e3a0a6091c152fb0164d74318871a1e821e5dfd5afc992a9c40" exitCode=0 Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.853657 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4bnk" event={"ID":"7771abc7-886d-41eb-b966-74538062511f","Type":"ContainerDied","Data":"99e0529609a00e3a0a6091c152fb0164d74318871a1e821e5dfd5afc992a9c40"} Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.854981 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-config" (OuterVolumeSpecName: "config") pod "52d0f01e-a7ab-4c07-bd46-d014e84c3d6a" (UID: "52d0f01e-a7ab-4c07-bd46-d014e84c3d6a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.855146 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "52d0f01e-a7ab-4c07-bd46-d014e84c3d6a" (UID: "52d0f01e-a7ab-4c07-bd46-d014e84c3d6a"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.855286 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rq2kj"] Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.855519 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-client-ca" (OuterVolumeSpecName: "client-ca") pod "52d0f01e-a7ab-4c07-bd46-d014e84c3d6a" (UID: "52d0f01e-a7ab-4c07-bd46-d014e84c3d6a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:29:10 crc kubenswrapper[4728]: E0227 10:29:10.855739 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d0f01e-a7ab-4c07-bd46-d014e84c3d6a" containerName="controller-manager" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.855755 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d0f01e-a7ab-4c07-bd46-d014e84c3d6a" containerName="controller-manager" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.855880 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d0f01e-a7ab-4c07-bd46-d014e84c3d6a" containerName="controller-manager" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.857458 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rq2kj" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.859661 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-kube-api-access-8fs96" (OuterVolumeSpecName: "kube-api-access-8fs96") pod "52d0f01e-a7ab-4c07-bd46-d014e84c3d6a" (UID: "52d0f01e-a7ab-4c07-bd46-d014e84c3d6a"). InnerVolumeSpecName "kube-api-access-8fs96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.867177 4728 generic.go:334] "Generic (PLEG): container finished" podID="34088c2f-1e95-4227-9242-9e4cde7a9fde" containerID="5c093ac091d15329058dd0850f1ace6f6015413207b11a5938f328ae75ae0042" exitCode=0 Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.867244 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b97gn" event={"ID":"34088c2f-1e95-4227-9242-9e4cde7a9fde","Type":"ContainerDied","Data":"5c093ac091d15329058dd0850f1ace6f6015413207b11a5938f328ae75ae0042"} Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.876735 4728 generic.go:334] "Generic (PLEG): container finished" podID="52d0f01e-a7ab-4c07-bd46-d014e84c3d6a" containerID="3543c7d69eef4755964ea7d1f54094ce89e5f119789732465e5e76e5a2a32963" exitCode=0 Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.876958 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" event={"ID":"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a","Type":"ContainerDied","Data":"3543c7d69eef4755964ea7d1f54094ce89e5f119789732465e5e76e5a2a32963"} Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.877097 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" event={"ID":"52d0f01e-a7ab-4c07-bd46-d014e84c3d6a","Type":"ContainerDied","Data":"a078e24dd3d7e3c0ed587a136a964741f1ba2caf7029180b75ed72fa7386c118"} Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.877196 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d2nkm" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.877233 4728 scope.go:117] "RemoveContainer" containerID="3543c7d69eef4755964ea7d1f54094ce89e5f119789732465e5e76e5a2a32963" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.877544 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "52d0f01e-a7ab-4c07-bd46-d014e84c3d6a" (UID: "52d0f01e-a7ab-4c07-bd46-d014e84c3d6a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.892203 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rq2kj"] Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.897444 4728 generic.go:334] "Generic (PLEG): container finished" podID="0a7d9e95-6291-465f-9f94-f99fc86e4389" containerID="c3b3f06c1b3ad39d67c4cd03c6dd87964a2ce5c3733bf9cf37afbb4bc5878188" exitCode=0 Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.897646 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gg7mm" event={"ID":"0a7d9e95-6291-465f-9f94-f99fc86e4389","Type":"ContainerDied","Data":"c3b3f06c1b3ad39d67c4cd03c6dd87964a2ce5c3733bf9cf37afbb4bc5878188"} Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.898371 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gg7mm" event={"ID":"0a7d9e95-6291-465f-9f94-f99fc86e4389","Type":"ContainerStarted","Data":"d608e445ce5ff869da41ef045aaa499ed97caf7477db66b53a7c4659d07b97e4"} Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.900943 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" 
podUID="e0d7166c-3042-4706-9683-6c6a32d29a9c" containerName="route-controller-manager" containerID="cri-o://40bbbe8abe3c7d1dbae7b3ba6b9103f549120818480625470e257632c58203a8" gracePeriod=30 Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.918276 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76b589b978-x8q75"] Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.919347 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.938647 4728 scope.go:117] "RemoveContainer" containerID="3543c7d69eef4755964ea7d1f54094ce89e5f119789732465e5e76e5a2a32963" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.940207 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76b589b978-x8q75"] Feb 27 10:29:10 crc kubenswrapper[4728]: E0227 10:29:10.942151 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3543c7d69eef4755964ea7d1f54094ce89e5f119789732465e5e76e5a2a32963\": container with ID starting with 3543c7d69eef4755964ea7d1f54094ce89e5f119789732465e5e76e5a2a32963 not found: ID does not exist" containerID="3543c7d69eef4755964ea7d1f54094ce89e5f119789732465e5e76e5a2a32963" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.942193 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3543c7d69eef4755964ea7d1f54094ce89e5f119789732465e5e76e5a2a32963"} err="failed to get container status \"3543c7d69eef4755964ea7d1f54094ce89e5f119789732465e5e76e5a2a32963\": rpc error: code = NotFound desc = could not find container \"3543c7d69eef4755964ea7d1f54094ce89e5f119789732465e5e76e5a2a32963\": container with ID starting with 3543c7d69eef4755964ea7d1f54094ce89e5f119789732465e5e76e5a2a32963 not found: ID 
does not exist" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.952672 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.953607 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd859\" (UniqueName: \"kubernetes.io/projected/055d41f1-5e49-481e-8662-a245ba878526-kube-api-access-nd859\") pod \"redhat-marketplace-rq2kj\" (UID: \"055d41f1-5e49-481e-8662-a245ba878526\") " pod="openshift-marketplace/redhat-marketplace-rq2kj" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.953644 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055d41f1-5e49-481e-8662-a245ba878526-catalog-content\") pod \"redhat-marketplace-rq2kj\" (UID: \"055d41f1-5e49-481e-8662-a245ba878526\") " pod="openshift-marketplace/redhat-marketplace-rq2kj" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.953677 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055d41f1-5e49-481e-8662-a245ba878526-utilities\") pod \"redhat-marketplace-rq2kj\" (UID: \"055d41f1-5e49-481e-8662-a245ba878526\") " pod="openshift-marketplace/redhat-marketplace-rq2kj" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.953745 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fs96\" (UniqueName: \"kubernetes.io/projected/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-kube-api-access-8fs96\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.953756 
4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.953766 4728 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.953775 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:10 crc kubenswrapper[4728]: I0227 10:29:10.953783 4728 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:10 crc kubenswrapper[4728]: E0227 10:29:10.954578 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 10:29:11.454558411 +0000 UTC m=+171.416924517 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.014351 4728 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-27T10:29:10.16009039Z","Handler":null,"Name":""} Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.017541 4728 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.017584 4728 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.054949 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055d41f1-5e49-481e-8662-a245ba878526-catalog-content\") pod \"redhat-marketplace-rq2kj\" (UID: \"055d41f1-5e49-481e-8662-a245ba878526\") " pod="openshift-marketplace/redhat-marketplace-rq2kj" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.055044 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055d41f1-5e49-481e-8662-a245ba878526-utilities\") pod \"redhat-marketplace-rq2kj\" (UID: \"055d41f1-5e49-481e-8662-a245ba878526\") " 
pod="openshift-marketplace/redhat-marketplace-rq2kj" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.055084 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtqc2\" (UniqueName: \"kubernetes.io/projected/0386cbf8-9da3-4ca2-86c0-5e38713e390f-kube-api-access-jtqc2\") pod \"controller-manager-76b589b978-x8q75\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.055118 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0386cbf8-9da3-4ca2-86c0-5e38713e390f-client-ca\") pod \"controller-manager-76b589b978-x8q75\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.055155 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.055810 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055d41f1-5e49-481e-8662-a245ba878526-utilities\") pod \"redhat-marketplace-rq2kj\" (UID: \"055d41f1-5e49-481e-8662-a245ba878526\") " pod="openshift-marketplace/redhat-marketplace-rq2kj" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.056092 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0386cbf8-9da3-4ca2-86c0-5e38713e390f-serving-cert\") pod \"controller-manager-76b589b978-x8q75\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.056369 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0386cbf8-9da3-4ca2-86c0-5e38713e390f-config\") pod \"controller-manager-76b589b978-x8q75\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.056396 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0386cbf8-9da3-4ca2-86c0-5e38713e390f-proxy-ca-bundles\") pod \"controller-manager-76b589b978-x8q75\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.056898 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd859\" (UniqueName: \"kubernetes.io/projected/055d41f1-5e49-481e-8662-a245ba878526-kube-api-access-nd859\") pod \"redhat-marketplace-rq2kj\" (UID: \"055d41f1-5e49-481e-8662-a245ba878526\") " pod="openshift-marketplace/redhat-marketplace-rq2kj" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.057937 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055d41f1-5e49-481e-8662-a245ba878526-catalog-content\") pod \"redhat-marketplace-rq2kj\" (UID: \"055d41f1-5e49-481e-8662-a245ba878526\") " pod="openshift-marketplace/redhat-marketplace-rq2kj" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.059098 4728 csi_attacher.go:380] 
kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.059540 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.083022 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd859\" (UniqueName: \"kubernetes.io/projected/055d41f1-5e49-481e-8662-a245ba878526-kube-api-access-nd859\") pod \"redhat-marketplace-rq2kj\" (UID: \"055d41f1-5e49-481e-8662-a245ba878526\") " pod="openshift-marketplace/redhat-marketplace-rq2kj" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.101206 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4vnn\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.137199 4728 patch_prober.go:28] interesting pod/router-default-5444994796-8n6md container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 10:29:11 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 27 10:29:11 crc 
kubenswrapper[4728]: [+]process-running ok Feb 27 10:29:11 crc kubenswrapper[4728]: healthz check failed Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.137245 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8n6md" podUID="90274913-fbff-4207-a3d8-f163ebcee220" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.145203 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnfnp"] Feb 27 10:29:11 crc kubenswrapper[4728]: W0227 10:29:11.154846 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb27f5cf8_de13_42a0_825a_0bc27ddc8466.slice/crio-5e2ca7c5d14ba46ed81fb10d92732b7e6c5a717dcd8d97a255b930e79f1c1a66 WatchSource:0}: Error finding container 5e2ca7c5d14ba46ed81fb10d92732b7e6c5a717dcd8d97a255b930e79f1c1a66: Status 404 returned error can't find the container with id 5e2ca7c5d14ba46ed81fb10d92732b7e6c5a717dcd8d97a255b930e79f1c1a66 Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.158141 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.158293 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0386cbf8-9da3-4ca2-86c0-5e38713e390f-serving-cert\") pod \"controller-manager-76b589b978-x8q75\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 
10:29:11.158321 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0386cbf8-9da3-4ca2-86c0-5e38713e390f-config\") pod \"controller-manager-76b589b978-x8q75\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.158339 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0386cbf8-9da3-4ca2-86c0-5e38713e390f-proxy-ca-bundles\") pod \"controller-manager-76b589b978-x8q75\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.158380 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtqc2\" (UniqueName: \"kubernetes.io/projected/0386cbf8-9da3-4ca2-86c0-5e38713e390f-kube-api-access-jtqc2\") pod \"controller-manager-76b589b978-x8q75\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.158404 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0386cbf8-9da3-4ca2-86c0-5e38713e390f-client-ca\") pod \"controller-manager-76b589b978-x8q75\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.159235 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0386cbf8-9da3-4ca2-86c0-5e38713e390f-client-ca\") pod \"controller-manager-76b589b978-x8q75\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " 
pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.160285 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0386cbf8-9da3-4ca2-86c0-5e38713e390f-config\") pod \"controller-manager-76b589b978-x8q75\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.160903 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0386cbf8-9da3-4ca2-86c0-5e38713e390f-proxy-ca-bundles\") pod \"controller-manager-76b589b978-x8q75\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.166876 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.169405 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0386cbf8-9da3-4ca2-86c0-5e38713e390f-serving-cert\") pod \"controller-manager-76b589b978-x8q75\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.172051 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.173771 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtqc2\" (UniqueName: \"kubernetes.io/projected/0386cbf8-9da3-4ca2-86c0-5e38713e390f-kube-api-access-jtqc2\") pod \"controller-manager-76b589b978-x8q75\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.203248 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d2nkm"] Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.205669 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d2nkm"] Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.239094 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rq2kj" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.259796 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.409722 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w4vnn"] Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.450556 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pztrz"] Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.451748 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pztrz" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.454165 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.463350 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pztrz"] Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.489872 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rq2kj"] Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.565272 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7850a694-dd44-4f4d-9b97-ecaa50efb803-catalog-content\") pod \"redhat-operators-pztrz\" (UID: \"7850a694-dd44-4f4d-9b97-ecaa50efb803\") " pod="openshift-marketplace/redhat-operators-pztrz" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.565328 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7850a694-dd44-4f4d-9b97-ecaa50efb803-utilities\") pod \"redhat-operators-pztrz\" (UID: \"7850a694-dd44-4f4d-9b97-ecaa50efb803\") " pod="openshift-marketplace/redhat-operators-pztrz" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.565350 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpx97\" (UniqueName: \"kubernetes.io/projected/7850a694-dd44-4f4d-9b97-ecaa50efb803-kube-api-access-bpx97\") pod \"redhat-operators-pztrz\" (UID: \"7850a694-dd44-4f4d-9b97-ecaa50efb803\") " pod="openshift-marketplace/redhat-operators-pztrz" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.668379 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7850a694-dd44-4f4d-9b97-ecaa50efb803-catalog-content\") pod \"redhat-operators-pztrz\" (UID: \"7850a694-dd44-4f4d-9b97-ecaa50efb803\") " pod="openshift-marketplace/redhat-operators-pztrz" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.668703 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7850a694-dd44-4f4d-9b97-ecaa50efb803-utilities\") pod \"redhat-operators-pztrz\" (UID: \"7850a694-dd44-4f4d-9b97-ecaa50efb803\") " pod="openshift-marketplace/redhat-operators-pztrz" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.668728 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpx97\" (UniqueName: \"kubernetes.io/projected/7850a694-dd44-4f4d-9b97-ecaa50efb803-kube-api-access-bpx97\") pod \"redhat-operators-pztrz\" (UID: \"7850a694-dd44-4f4d-9b97-ecaa50efb803\") " pod="openshift-marketplace/redhat-operators-pztrz" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.668954 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7850a694-dd44-4f4d-9b97-ecaa50efb803-catalog-content\") pod \"redhat-operators-pztrz\" (UID: \"7850a694-dd44-4f4d-9b97-ecaa50efb803\") " pod="openshift-marketplace/redhat-operators-pztrz" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.669093 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7850a694-dd44-4f4d-9b97-ecaa50efb803-utilities\") pod \"redhat-operators-pztrz\" (UID: \"7850a694-dd44-4f4d-9b97-ecaa50efb803\") " pod="openshift-marketplace/redhat-operators-pztrz" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.689473 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpx97\" (UniqueName: 
\"kubernetes.io/projected/7850a694-dd44-4f4d-9b97-ecaa50efb803-kube-api-access-bpx97\") pod \"redhat-operators-pztrz\" (UID: \"7850a694-dd44-4f4d-9b97-ecaa50efb803\") " pod="openshift-marketplace/redhat-operators-pztrz" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.744025 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76b589b978-x8q75"] Feb 27 10:29:11 crc kubenswrapper[4728]: W0227 10:29:11.762419 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0386cbf8_9da3_4ca2_86c0_5e38713e390f.slice/crio-5c2a760395f5d741d41b07674c582ae4076faae61765ff79369cf475e466d0bd WatchSource:0}: Error finding container 5c2a760395f5d741d41b07674c582ae4076faae61765ff79369cf475e466d0bd: Status 404 returned error can't find the container with id 5c2a760395f5d741d41b07674c582ae4076faae61765ff79369cf475e466d0bd Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.815163 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.821836 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pztrz" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.850438 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xsskq"] Feb 27 10:29:11 crc kubenswrapper[4728]: E0227 10:29:11.850702 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d7166c-3042-4706-9683-6c6a32d29a9c" containerName="route-controller-manager" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.850714 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d7166c-3042-4706-9683-6c6a32d29a9c" containerName="route-controller-manager" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.850819 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d7166c-3042-4706-9683-6c6a32d29a9c" containerName="route-controller-manager" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.851575 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xsskq" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.867002 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xsskq"] Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.870365 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d7166c-3042-4706-9683-6c6a32d29a9c-config\") pod \"e0d7166c-3042-4706-9683-6c6a32d29a9c\" (UID: \"e0d7166c-3042-4706-9683-6c6a32d29a9c\") " Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.870418 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0d7166c-3042-4706-9683-6c6a32d29a9c-serving-cert\") pod \"e0d7166c-3042-4706-9683-6c6a32d29a9c\" (UID: \"e0d7166c-3042-4706-9683-6c6a32d29a9c\") " Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.870566 4728 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0d7166c-3042-4706-9683-6c6a32d29a9c-client-ca\") pod \"e0d7166c-3042-4706-9683-6c6a32d29a9c\" (UID: \"e0d7166c-3042-4706-9683-6c6a32d29a9c\") " Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.870840 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z2z7\" (UniqueName: \"kubernetes.io/projected/e0d7166c-3042-4706-9683-6c6a32d29a9c-kube-api-access-7z2z7\") pod \"e0d7166c-3042-4706-9683-6c6a32d29a9c\" (UID: \"e0d7166c-3042-4706-9683-6c6a32d29a9c\") " Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.871489 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0d7166c-3042-4706-9683-6c6a32d29a9c-client-ca" (OuterVolumeSpecName: "client-ca") pod "e0d7166c-3042-4706-9683-6c6a32d29a9c" (UID: "e0d7166c-3042-4706-9683-6c6a32d29a9c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.871704 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0d7166c-3042-4706-9683-6c6a32d29a9c-config" (OuterVolumeSpecName: "config") pod "e0d7166c-3042-4706-9683-6c6a32d29a9c" (UID: "e0d7166c-3042-4706-9683-6c6a32d29a9c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.871937 4728 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0d7166c-3042-4706-9683-6c6a32d29a9c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.871962 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d7166c-3042-4706-9683-6c6a32d29a9c-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.877373 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0d7166c-3042-4706-9683-6c6a32d29a9c-kube-api-access-7z2z7" (OuterVolumeSpecName: "kube-api-access-7z2z7") pod "e0d7166c-3042-4706-9683-6c6a32d29a9c" (UID: "e0d7166c-3042-4706-9683-6c6a32d29a9c"). InnerVolumeSpecName "kube-api-access-7z2z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.882906 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0d7166c-3042-4706-9683-6c6a32d29a9c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e0d7166c-3042-4706-9683-6c6a32d29a9c" (UID: "e0d7166c-3042-4706-9683-6c6a32d29a9c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.912491 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sftjq" event={"ID":"6e5d2a5d-0128-4d37-b653-555cc40a8d39","Type":"ContainerStarted","Data":"009abd51aa86c17407652d27416a1a8bedc4b85e5955b29d5fff1bad8eaa2914"} Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.921264 4728 generic.go:334] "Generic (PLEG): container finished" podID="055d41f1-5e49-481e-8662-a245ba878526" containerID="73c89c06ea962f51600b9489cd54f13a9780d3127827b0d05a55bc2d7d0437d4" exitCode=0 Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.921446 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rq2kj" event={"ID":"055d41f1-5e49-481e-8662-a245ba878526","Type":"ContainerDied","Data":"73c89c06ea962f51600b9489cd54f13a9780d3127827b0d05a55bc2d7d0437d4"} Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.921476 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rq2kj" event={"ID":"055d41f1-5e49-481e-8662-a245ba878526","Type":"ContainerStarted","Data":"f995b5b18ffaa7610f311a384b788553479fcc99627719e1d6b7bf86fae7c2a1"} Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.928479 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" event={"ID":"0386cbf8-9da3-4ca2-86c0-5e38713e390f","Type":"ContainerStarted","Data":"5c2a760395f5d741d41b07674c582ae4076faae61765ff79369cf475e466d0bd"} Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.938551 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" event={"ID":"9f01d342-6bde-4063-b99d-b0efda456aef","Type":"ContainerStarted","Data":"4251ecde32a2c009d59cc49c987d3a45a0917a6d4e04ba12ccc59a2159ee5c46"} Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 
10:29:11.938587 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" event={"ID":"9f01d342-6bde-4063-b99d-b0efda456aef","Type":"ContainerStarted","Data":"806d53efbb4f84dd8a088d8c0237b502cc6124d832b40e3687bbc823a2d1d7c7"} Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.938770 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.944082 4728 generic.go:334] "Generic (PLEG): container finished" podID="e0d7166c-3042-4706-9683-6c6a32d29a9c" containerID="40bbbe8abe3c7d1dbae7b3ba6b9103f549120818480625470e257632c58203a8" exitCode=0 Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.944144 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" event={"ID":"e0d7166c-3042-4706-9683-6c6a32d29a9c","Type":"ContainerDied","Data":"40bbbe8abe3c7d1dbae7b3ba6b9103f549120818480625470e257632c58203a8"} Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.944167 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" event={"ID":"e0d7166c-3042-4706-9683-6c6a32d29a9c","Type":"ContainerDied","Data":"eb23be9ed170476531702eca60fa01b2f89feaa93fce314539ebc25aac92dc62"} Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.944176 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.944183 4728 scope.go:117] "RemoveContainer" containerID="40bbbe8abe3c7d1dbae7b3ba6b9103f549120818480625470e257632c58203a8" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.947057 4728 generic.go:334] "Generic (PLEG): container finished" podID="b27f5cf8-de13-42a0-825a-0bc27ddc8466" containerID="f4e883f75c7162b2da2577019ed98c246c7b645d40b4538e79648dcac6a8405b" exitCode=0 Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.947084 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnfnp" event={"ID":"b27f5cf8-de13-42a0-825a-0bc27ddc8466","Type":"ContainerDied","Data":"f4e883f75c7162b2da2577019ed98c246c7b645d40b4538e79648dcac6a8405b"} Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.947102 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnfnp" event={"ID":"b27f5cf8-de13-42a0-825a-0bc27ddc8466","Type":"ContainerStarted","Data":"5e2ca7c5d14ba46ed81fb10d92732b7e6c5a717dcd8d97a255b930e79f1c1a66"} Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.961007 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-sftjq" podStartSLOduration=11.960992543 podStartE2EDuration="11.960992543s" podCreationTimestamp="2026-02-27 10:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:11.936138999 +0000 UTC m=+171.898505105" watchObservedRunningTime="2026-02-27 10:29:11.960992543 +0000 UTC m=+171.923358649" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.973641 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/49693a3e-1583-4584-9049-fe85013bb9ab-utilities\") pod \"redhat-operators-xsskq\" (UID: \"49693a3e-1583-4584-9049-fe85013bb9ab\") " pod="openshift-marketplace/redhat-operators-xsskq" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.973705 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt98j\" (UniqueName: \"kubernetes.io/projected/49693a3e-1583-4584-9049-fe85013bb9ab-kube-api-access-rt98j\") pod \"redhat-operators-xsskq\" (UID: \"49693a3e-1583-4584-9049-fe85013bb9ab\") " pod="openshift-marketplace/redhat-operators-xsskq" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.973750 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49693a3e-1583-4584-9049-fe85013bb9ab-catalog-content\") pod \"redhat-operators-xsskq\" (UID: \"49693a3e-1583-4584-9049-fe85013bb9ab\") " pod="openshift-marketplace/redhat-operators-xsskq" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.973797 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0d7166c-3042-4706-9683-6c6a32d29a9c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.973809 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z2z7\" (UniqueName: \"kubernetes.io/projected/e0d7166c-3042-4706-9683-6c6a32d29a9c-kube-api-access-7z2z7\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.990091 4728 scope.go:117] "RemoveContainer" containerID="40bbbe8abe3c7d1dbae7b3ba6b9103f549120818480625470e257632c58203a8" Feb 27 10:29:11 crc kubenswrapper[4728]: E0227 10:29:11.997643 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"40bbbe8abe3c7d1dbae7b3ba6b9103f549120818480625470e257632c58203a8\": container with ID starting with 40bbbe8abe3c7d1dbae7b3ba6b9103f549120818480625470e257632c58203a8 not found: ID does not exist" containerID="40bbbe8abe3c7d1dbae7b3ba6b9103f549120818480625470e257632c58203a8" Feb 27 10:29:11 crc kubenswrapper[4728]: I0227 10:29:11.997687 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40bbbe8abe3c7d1dbae7b3ba6b9103f549120818480625470e257632c58203a8"} err="failed to get container status \"40bbbe8abe3c7d1dbae7b3ba6b9103f549120818480625470e257632c58203a8\": rpc error: code = NotFound desc = could not find container \"40bbbe8abe3c7d1dbae7b3ba6b9103f549120818480625470e257632c58203a8\": container with ID starting with 40bbbe8abe3c7d1dbae7b3ba6b9103f549120818480625470e257632c58203a8 not found: ID does not exist" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.033437 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" podStartSLOduration=102.033421553 podStartE2EDuration="1m42.033421553s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:12.019620363 +0000 UTC m=+171.981986469" watchObservedRunningTime="2026-02-27 10:29:12.033421553 +0000 UTC m=+171.995787659" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.035371 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj"] Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.038119 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tz4jj"] Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.078276 4728 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-rt98j\" (UniqueName: \"kubernetes.io/projected/49693a3e-1583-4584-9049-fe85013bb9ab-kube-api-access-rt98j\") pod \"redhat-operators-xsskq\" (UID: \"49693a3e-1583-4584-9049-fe85013bb9ab\") " pod="openshift-marketplace/redhat-operators-xsskq" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.078724 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49693a3e-1583-4584-9049-fe85013bb9ab-catalog-content\") pod \"redhat-operators-xsskq\" (UID: \"49693a3e-1583-4584-9049-fe85013bb9ab\") " pod="openshift-marketplace/redhat-operators-xsskq" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.079626 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49693a3e-1583-4584-9049-fe85013bb9ab-catalog-content\") pod \"redhat-operators-xsskq\" (UID: \"49693a3e-1583-4584-9049-fe85013bb9ab\") " pod="openshift-marketplace/redhat-operators-xsskq" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.079776 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49693a3e-1583-4584-9049-fe85013bb9ab-utilities\") pod \"redhat-operators-xsskq\" (UID: \"49693a3e-1583-4584-9049-fe85013bb9ab\") " pod="openshift-marketplace/redhat-operators-xsskq" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.080045 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49693a3e-1583-4584-9049-fe85013bb9ab-utilities\") pod \"redhat-operators-xsskq\" (UID: \"49693a3e-1583-4584-9049-fe85013bb9ab\") " pod="openshift-marketplace/redhat-operators-xsskq" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.119679 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt98j\" (UniqueName: 
\"kubernetes.io/projected/49693a3e-1583-4584-9049-fe85013bb9ab-kube-api-access-rt98j\") pod \"redhat-operators-xsskq\" (UID: \"49693a3e-1583-4584-9049-fe85013bb9ab\") " pod="openshift-marketplace/redhat-operators-xsskq" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.134460 4728 patch_prober.go:28] interesting pod/router-default-5444994796-8n6md container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 10:29:12 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 27 10:29:12 crc kubenswrapper[4728]: [+]process-running ok Feb 27 10:29:12 crc kubenswrapper[4728]: healthz check failed Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.134789 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8n6md" podUID="90274913-fbff-4207-a3d8-f163ebcee220" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.182028 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pztrz"] Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.193963 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xsskq" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.487443 4728 ???:1] "http: TLS handshake error from 192.168.126.11:42276: no serving certificate available for the kubelet" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.657063 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xsskq"] Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.760201 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d0f01e-a7ab-4c07-bd46-d014e84c3d6a" path="/var/lib/kubelet/pods/52d0f01e-a7ab-4c07-bd46-d014e84c3d6a/volumes" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.760990 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.761461 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0d7166c-3042-4706-9683-6c6a32d29a9c" path="/var/lib/kubelet/pods/e0d7166c-3042-4706-9683-6c6a32d29a9c/volumes" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.879367 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.880028 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.883214 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.883283 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.886961 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.962051 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" event={"ID":"0386cbf8-9da3-4ca2-86c0-5e38713e390f","Type":"ContainerStarted","Data":"856c2caee3c7918df12836b0c71fac4399d6c6b7930b4372ff04db9a16a47c4d"} Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.962284 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.965825 4728 generic.go:334] "Generic (PLEG): container finished" podID="7850a694-dd44-4f4d-9b97-ecaa50efb803" containerID="63c6ec26161b66afa73c35c10df5963837024140867e7a30cf365ba8d2d788d7" exitCode=0 Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.966054 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pztrz" event={"ID":"7850a694-dd44-4f4d-9b97-ecaa50efb803","Type":"ContainerDied","Data":"63c6ec26161b66afa73c35c10df5963837024140867e7a30cf365ba8d2d788d7"} Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.966097 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pztrz" 
event={"ID":"7850a694-dd44-4f4d-9b97-ecaa50efb803","Type":"ContainerStarted","Data":"3629dbc3cc76503a951f580fa44eccb5e3a74544f144dc7acd9c93a06db286d0"} Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.966832 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.967628 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsskq" event={"ID":"49693a3e-1583-4584-9049-fe85013bb9ab","Type":"ContainerStarted","Data":"fbc4f4cc9abb4c564f8c7bb60b3a363970d283b50849bfae94b7d280ce129783"} Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.998246 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d16d774a-941b-4d0c-b6d7-5c13a7e88153-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d16d774a-941b-4d0c-b6d7-5c13a7e88153\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:29:12 crc kubenswrapper[4728]: I0227 10:29:12.998310 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d16d774a-941b-4d0c-b6d7-5c13a7e88153-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d16d774a-941b-4d0c-b6d7-5c13a7e88153\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.007104 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" podStartSLOduration=4.007088317 podStartE2EDuration="4.007088317s" podCreationTimestamp="2026-02-27 10:29:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:12.98426497 +0000 UTC 
m=+172.946631076" watchObservedRunningTime="2026-02-27 10:29:13.007088317 +0000 UTC m=+172.969454423" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.022827 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.022863 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.024291 4728 patch_prober.go:28] interesting pod/console-f9d7485db-tvflj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.024320 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tvflj" podUID="b1d22605-abd6-4fc6-8352-8fe78ec02332" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.100056 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d16d774a-941b-4d0c-b6d7-5c13a7e88153-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d16d774a-941b-4d0c-b6d7-5c13a7e88153\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.100154 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d16d774a-941b-4d0c-b6d7-5c13a7e88153-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d16d774a-941b-4d0c-b6d7-5c13a7e88153\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.100278 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d16d774a-941b-4d0c-b6d7-5c13a7e88153-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d16d774a-941b-4d0c-b6d7-5c13a7e88153\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.121138 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d16d774a-941b-4d0c-b6d7-5c13a7e88153-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d16d774a-941b-4d0c-b6d7-5c13a7e88153\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.137448 4728 patch_prober.go:28] interesting pod/router-default-5444994796-8n6md container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 10:29:13 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 27 10:29:13 crc kubenswrapper[4728]: [+]process-running ok Feb 27 10:29:13 crc kubenswrapper[4728]: healthz check failed Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.137552 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8n6md" podUID="90274913-fbff-4207-a3d8-f163ebcee220" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.194199 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.242422 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.247854 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-2f6px" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.271074 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.280732 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j72xn" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.413040 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.415458 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.426605 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.426883 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.432172 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.497979 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz"] Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.498604 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.502133 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.502351 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.503157 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.503167 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.503678 4728 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.503771 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.513949 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz"] Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.518101 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbab99ad-e0f9-47d1-8fbb-478886f84964-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bbab99ad-e0f9-47d1-8fbb-478886f84964\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.518145 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbab99ad-e0f9-47d1-8fbb-478886f84964-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bbab99ad-e0f9-47d1-8fbb-478886f84964\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.619911 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-config\") pod \"route-controller-manager-6b9d5c76fc-vqvnz\" (UID: \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.619986 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/bbab99ad-e0f9-47d1-8fbb-478886f84964-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bbab99ad-e0f9-47d1-8fbb-478886f84964\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.620009 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbab99ad-e0f9-47d1-8fbb-478886f84964-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bbab99ad-e0f9-47d1-8fbb-478886f84964\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.620031 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-serving-cert\") pod \"route-controller-manager-6b9d5c76fc-vqvnz\" (UID: \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.620071 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-client-ca\") pod \"route-controller-manager-6b9d5c76fc-vqvnz\" (UID: \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.620089 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-629xs\" (UniqueName: \"kubernetes.io/projected/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-kube-api-access-629xs\") pod \"route-controller-manager-6b9d5c76fc-vqvnz\" (UID: \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" Feb 
27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.620255 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbab99ad-e0f9-47d1-8fbb-478886f84964-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bbab99ad-e0f9-47d1-8fbb-478886f84964\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.645829 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbab99ad-e0f9-47d1-8fbb-478886f84964-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bbab99ad-e0f9-47d1-8fbb-478886f84964\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.696405 4728 patch_prober.go:28] interesting pod/downloads-7954f5f757-c46ql container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.696446 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-c46ql" podUID="a3656135-373e-4ec6-9cf1-e34d6a95c5a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.696718 4728 patch_prober.go:28] interesting pod/downloads-7954f5f757-c46ql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.696802 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-c46ql" 
podUID="a3656135-373e-4ec6-9cf1-e34d6a95c5a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.721557 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-config\") pod \"route-controller-manager-6b9d5c76fc-vqvnz\" (UID: \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.721609 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-serving-cert\") pod \"route-controller-manager-6b9d5c76fc-vqvnz\" (UID: \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.721633 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-629xs\" (UniqueName: \"kubernetes.io/projected/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-kube-api-access-629xs\") pod \"route-controller-manager-6b9d5c76fc-vqvnz\" (UID: \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.721655 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-client-ca\") pod \"route-controller-manager-6b9d5c76fc-vqvnz\" (UID: \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 
10:29:13.722690 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-client-ca\") pod \"route-controller-manager-6b9d5c76fc-vqvnz\" (UID: \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.724321 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-config\") pod \"route-controller-manager-6b9d5c76fc-vqvnz\" (UID: \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.741447 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-serving-cert\") pod \"route-controller-manager-6b9d5c76fc-vqvnz\" (UID: \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.756099 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.758247 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-629xs\" (UniqueName: \"kubernetes.io/projected/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-kube-api-access-629xs\") pod \"route-controller-manager-6b9d5c76fc-vqvnz\" (UID: \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.765888 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 10:29:13 crc kubenswrapper[4728]: I0227 10:29:13.840628 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" Feb 27 10:29:14 crc kubenswrapper[4728]: E0227 10:29:14.014727 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 10:29:14 crc kubenswrapper[4728]: E0227 10:29:14.020778 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 10:29:14 crc kubenswrapper[4728]: I0227 10:29:14.023068 4728 generic.go:334] "Generic (PLEG): container finished" podID="f82e7468-152d-46a6-9012-3bb0b4219b3f" containerID="aa1cb35b13eb3354dc69d33a288dd974cee0bf56a76e56257f834a3057988edb" exitCode=0 Feb 27 10:29:14 crc kubenswrapper[4728]: 
I0227 10:29:14.023123 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt" event={"ID":"f82e7468-152d-46a6-9012-3bb0b4219b3f","Type":"ContainerDied","Data":"aa1cb35b13eb3354dc69d33a288dd974cee0bf56a76e56257f834a3057988edb"} Feb 27 10:29:14 crc kubenswrapper[4728]: E0227 10:29:14.026347 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 10:29:14 crc kubenswrapper[4728]: E0227 10:29:14.026374 4728 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" podUID="a4d6106d-dd3c-4a6a-aaeb-b441add3fdad" containerName="kube-multus-additional-cni-plugins" Feb 27 10:29:14 crc kubenswrapper[4728]: I0227 10:29:14.028456 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d16d774a-941b-4d0c-b6d7-5c13a7e88153","Type":"ContainerStarted","Data":"69e08abd418794ace1b31affc5004d900cce75b220636032531dc3ebaf7449f8"} Feb 27 10:29:14 crc kubenswrapper[4728]: I0227 10:29:14.046687 4728 generic.go:334] "Generic (PLEG): container finished" podID="49693a3e-1583-4584-9049-fe85013bb9ab" containerID="bee8ee694307cb7e7009e7378d26a8070fb0e8dc035e34d6cc00a05c1023408e" exitCode=0 Feb 27 10:29:14 crc kubenswrapper[4728]: I0227 10:29:14.046820 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsskq" event={"ID":"49693a3e-1583-4584-9049-fe85013bb9ab","Type":"ContainerDied","Data":"bee8ee694307cb7e7009e7378d26a8070fb0e8dc035e34d6cc00a05c1023408e"} Feb 
27 10:29:14 crc kubenswrapper[4728]: I0227 10:29:14.091570 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 27 10:29:14 crc kubenswrapper[4728]: W0227 10:29:14.103659 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbbab99ad_e0f9_47d1_8fbb_478886f84964.slice/crio-3cdd6c704b45941e14f4e3911fc665b061c179e6f8b7a8b8ae27ad06435ee14e WatchSource:0}: Error finding container 3cdd6c704b45941e14f4e3911fc665b061c179e6f8b7a8b8ae27ad06435ee14e: Status 404 returned error can't find the container with id 3cdd6c704b45941e14f4e3911fc665b061c179e6f8b7a8b8ae27ad06435ee14e Feb 27 10:29:14 crc kubenswrapper[4728]: I0227 10:29:14.133215 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:14 crc kubenswrapper[4728]: I0227 10:29:14.136137 4728 patch_prober.go:28] interesting pod/router-default-5444994796-8n6md container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 10:29:14 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 27 10:29:14 crc kubenswrapper[4728]: [+]process-running ok Feb 27 10:29:14 crc kubenswrapper[4728]: healthz check failed Feb 27 10:29:14 crc kubenswrapper[4728]: I0227 10:29:14.136182 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8n6md" podUID="90274913-fbff-4207-a3d8-f163ebcee220" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:29:14 crc kubenswrapper[4728]: I0227 10:29:14.240964 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cxzlx" Feb 27 10:29:14 crc kubenswrapper[4728]: I0227 10:29:14.348987 4728 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz"] Feb 27 10:29:14 crc kubenswrapper[4728]: W0227 10:29:14.389605 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98ac1ff8_14b4_40f9_9d69_87c2fbab5aa9.slice/crio-f8260ac7e1ff95f5a37cdada4d8ead833da38b447a163c69f87560429991075a WatchSource:0}: Error finding container f8260ac7e1ff95f5a37cdada4d8ead833da38b447a163c69f87560429991075a: Status 404 returned error can't find the container with id f8260ac7e1ff95f5a37cdada4d8ead833da38b447a163c69f87560429991075a Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.069423 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bbab99ad-e0f9-47d1-8fbb-478886f84964","Type":"ContainerStarted","Data":"422f10ddef713402370340e295a2cd7b351faba26872e6a9e04ef862368ed2ea"} Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.069753 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bbab99ad-e0f9-47d1-8fbb-478886f84964","Type":"ContainerStarted","Data":"3cdd6c704b45941e14f4e3911fc665b061c179e6f8b7a8b8ae27ad06435ee14e"} Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.074625 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" event={"ID":"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9","Type":"ContainerStarted","Data":"2e72df5008f9f47bbba1166fde91008ed8b175a17cf53bc293ce423868fae854"} Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.074651 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" 
event={"ID":"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9","Type":"ContainerStarted","Data":"f8260ac7e1ff95f5a37cdada4d8ead833da38b447a163c69f87560429991075a"} Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.075276 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.077063 4728 generic.go:334] "Generic (PLEG): container finished" podID="d16d774a-941b-4d0c-b6d7-5c13a7e88153" containerID="8c6d3d02b04dcf70673498584a05c84bad5089073559ed29329b1e977818a544" exitCode=0 Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.077255 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d16d774a-941b-4d0c-b6d7-5c13a7e88153","Type":"ContainerDied","Data":"8c6d3d02b04dcf70673498584a05c84bad5089073559ed29329b1e977818a544"} Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.091028 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.095587 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.095573381 podStartE2EDuration="2.095573381s" podCreationTimestamp="2026-02-27 10:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:15.095571831 +0000 UTC m=+175.057937947" watchObservedRunningTime="2026-02-27 10:29:15.095573381 +0000 UTC m=+175.057939487" Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.132319 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" podStartSLOduration=6.132304321 
podStartE2EDuration="6.132304321s" podCreationTimestamp="2026-02-27 10:29:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:15.128909664 +0000 UTC m=+175.091275770" watchObservedRunningTime="2026-02-27 10:29:15.132304321 +0000 UTC m=+175.094670427" Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.136856 4728 patch_prober.go:28] interesting pod/router-default-5444994796-8n6md container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 10:29:15 crc kubenswrapper[4728]: [-]has-synced failed: reason withheld Feb 27 10:29:15 crc kubenswrapper[4728]: [+]process-running ok Feb 27 10:29:15 crc kubenswrapper[4728]: healthz check failed Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.136921 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8n6md" podUID="90274913-fbff-4207-a3d8-f163ebcee220" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.430916 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt" Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.560779 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kl4f\" (UniqueName: \"kubernetes.io/projected/f82e7468-152d-46a6-9012-3bb0b4219b3f-kube-api-access-2kl4f\") pod \"f82e7468-152d-46a6-9012-3bb0b4219b3f\" (UID: \"f82e7468-152d-46a6-9012-3bb0b4219b3f\") " Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.560838 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f82e7468-152d-46a6-9012-3bb0b4219b3f-config-volume\") pod \"f82e7468-152d-46a6-9012-3bb0b4219b3f\" (UID: \"f82e7468-152d-46a6-9012-3bb0b4219b3f\") " Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.560916 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f82e7468-152d-46a6-9012-3bb0b4219b3f-secret-volume\") pod \"f82e7468-152d-46a6-9012-3bb0b4219b3f\" (UID: \"f82e7468-152d-46a6-9012-3bb0b4219b3f\") " Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.562001 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f82e7468-152d-46a6-9012-3bb0b4219b3f-config-volume" (OuterVolumeSpecName: "config-volume") pod "f82e7468-152d-46a6-9012-3bb0b4219b3f" (UID: "f82e7468-152d-46a6-9012-3bb0b4219b3f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.568139 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f82e7468-152d-46a6-9012-3bb0b4219b3f-kube-api-access-2kl4f" (OuterVolumeSpecName: "kube-api-access-2kl4f") pod "f82e7468-152d-46a6-9012-3bb0b4219b3f" (UID: "f82e7468-152d-46a6-9012-3bb0b4219b3f"). 
InnerVolumeSpecName "kube-api-access-2kl4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.583441 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f82e7468-152d-46a6-9012-3bb0b4219b3f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f82e7468-152d-46a6-9012-3bb0b4219b3f" (UID: "f82e7468-152d-46a6-9012-3bb0b4219b3f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.663349 4728 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f82e7468-152d-46a6-9012-3bb0b4219b3f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.663386 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kl4f\" (UniqueName: \"kubernetes.io/projected/f82e7468-152d-46a6-9012-3bb0b4219b3f-kube-api-access-2kl4f\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.663395 4728 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f82e7468-152d-46a6-9012-3bb0b4219b3f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:15 crc kubenswrapper[4728]: I0227 10:29:15.984865 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qt4kh" Feb 27 10:29:16 crc kubenswrapper[4728]: I0227 10:29:16.097103 4728 generic.go:334] "Generic (PLEG): container finished" podID="bbab99ad-e0f9-47d1-8fbb-478886f84964" containerID="422f10ddef713402370340e295a2cd7b351faba26872e6a9e04ef862368ed2ea" exitCode=0 Feb 27 10:29:16 crc kubenswrapper[4728]: I0227 10:29:16.097157 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"bbab99ad-e0f9-47d1-8fbb-478886f84964","Type":"ContainerDied","Data":"422f10ddef713402370340e295a2cd7b351faba26872e6a9e04ef862368ed2ea"} Feb 27 10:29:16 crc kubenswrapper[4728]: I0227 10:29:16.099495 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt" Feb 27 10:29:16 crc kubenswrapper[4728]: I0227 10:29:16.099690 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt" event={"ID":"f82e7468-152d-46a6-9012-3bb0b4219b3f","Type":"ContainerDied","Data":"5b6dedda68e3e6d9345b1c943c9dedffb37215d26d72fde983e13f974ad05ec9"} Feb 27 10:29:16 crc kubenswrapper[4728]: I0227 10:29:16.099728 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b6dedda68e3e6d9345b1c943c9dedffb37215d26d72fde983e13f974ad05ec9" Feb 27 10:29:16 crc kubenswrapper[4728]: I0227 10:29:16.136080 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:16 crc kubenswrapper[4728]: I0227 10:29:16.138441 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-8n6md" Feb 27 10:29:16 crc kubenswrapper[4728]: I0227 10:29:16.567744 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:29:16 crc kubenswrapper[4728]: I0227 10:29:16.686767 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d16d774a-941b-4d0c-b6d7-5c13a7e88153-kube-api-access\") pod \"d16d774a-941b-4d0c-b6d7-5c13a7e88153\" (UID: \"d16d774a-941b-4d0c-b6d7-5c13a7e88153\") " Feb 27 10:29:16 crc kubenswrapper[4728]: I0227 10:29:16.686880 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d16d774a-941b-4d0c-b6d7-5c13a7e88153-kubelet-dir\") pod \"d16d774a-941b-4d0c-b6d7-5c13a7e88153\" (UID: \"d16d774a-941b-4d0c-b6d7-5c13a7e88153\") " Feb 27 10:29:16 crc kubenswrapper[4728]: I0227 10:29:16.687134 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d16d774a-941b-4d0c-b6d7-5c13a7e88153-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d16d774a-941b-4d0c-b6d7-5c13a7e88153" (UID: "d16d774a-941b-4d0c-b6d7-5c13a7e88153"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:29:16 crc kubenswrapper[4728]: I0227 10:29:16.692369 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d16d774a-941b-4d0c-b6d7-5c13a7e88153-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d16d774a-941b-4d0c-b6d7-5c13a7e88153" (UID: "d16d774a-941b-4d0c-b6d7-5c13a7e88153"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:29:16 crc kubenswrapper[4728]: I0227 10:29:16.788756 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d16d774a-941b-4d0c-b6d7-5c13a7e88153-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:16 crc kubenswrapper[4728]: I0227 10:29:16.788787 4728 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d16d774a-941b-4d0c-b6d7-5c13a7e88153-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:17 crc kubenswrapper[4728]: I0227 10:29:17.138344 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-5lw67_7319e158-317b-4f98-b9da-0481f2c0aca8/cluster-samples-operator/0.log" Feb 27 10:29:17 crc kubenswrapper[4728]: I0227 10:29:17.138390 4728 generic.go:334] "Generic (PLEG): container finished" podID="7319e158-317b-4f98-b9da-0481f2c0aca8" containerID="5dfe433fcbca12898a54cdebf5f21127dbec3edf1c42569b187358f897d03453" exitCode=2 Feb 27 10:29:17 crc kubenswrapper[4728]: I0227 10:29:17.138454 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5lw67" event={"ID":"7319e158-317b-4f98-b9da-0481f2c0aca8","Type":"ContainerDied","Data":"5dfe433fcbca12898a54cdebf5f21127dbec3edf1c42569b187358f897d03453"} Feb 27 10:29:17 crc kubenswrapper[4728]: I0227 10:29:17.138780 4728 scope.go:117] "RemoveContainer" containerID="5dfe433fcbca12898a54cdebf5f21127dbec3edf1c42569b187358f897d03453" Feb 27 10:29:17 crc kubenswrapper[4728]: I0227 10:29:17.149289 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d16d774a-941b-4d0c-b6d7-5c13a7e88153","Type":"ContainerDied","Data":"69e08abd418794ace1b31affc5004d900cce75b220636032531dc3ebaf7449f8"} Feb 27 10:29:17 crc 
kubenswrapper[4728]: I0227 10:29:17.149329 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69e08abd418794ace1b31affc5004d900cce75b220636032531dc3ebaf7449f8" Feb 27 10:29:17 crc kubenswrapper[4728]: I0227 10:29:17.149403 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 10:29:17 crc kubenswrapper[4728]: I0227 10:29:17.442718 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:29:17 crc kubenswrapper[4728]: I0227 10:29:17.516222 4728 ???:1] "http: TLS handshake error from 192.168.126.11:42280: no serving certificate available for the kubelet" Feb 27 10:29:17 crc kubenswrapper[4728]: I0227 10:29:17.601258 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbab99ad-e0f9-47d1-8fbb-478886f84964-kubelet-dir\") pod \"bbab99ad-e0f9-47d1-8fbb-478886f84964\" (UID: \"bbab99ad-e0f9-47d1-8fbb-478886f84964\") " Feb 27 10:29:17 crc kubenswrapper[4728]: I0227 10:29:17.601347 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbab99ad-e0f9-47d1-8fbb-478886f84964-kube-api-access\") pod \"bbab99ad-e0f9-47d1-8fbb-478886f84964\" (UID: \"bbab99ad-e0f9-47d1-8fbb-478886f84964\") " Feb 27 10:29:17 crc kubenswrapper[4728]: I0227 10:29:17.601367 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbab99ad-e0f9-47d1-8fbb-478886f84964-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bbab99ad-e0f9-47d1-8fbb-478886f84964" (UID: "bbab99ad-e0f9-47d1-8fbb-478886f84964"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:29:17 crc kubenswrapper[4728]: I0227 10:29:17.601694 4728 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbab99ad-e0f9-47d1-8fbb-478886f84964-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:17 crc kubenswrapper[4728]: I0227 10:29:17.605119 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbab99ad-e0f9-47d1-8fbb-478886f84964-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bbab99ad-e0f9-47d1-8fbb-478886f84964" (UID: "bbab99ad-e0f9-47d1-8fbb-478886f84964"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:29:17 crc kubenswrapper[4728]: I0227 10:29:17.626834 4728 ???:1] "http: TLS handshake error from 192.168.126.11:42282: no serving certificate available for the kubelet" Feb 27 10:29:17 crc kubenswrapper[4728]: I0227 10:29:17.702964 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbab99ad-e0f9-47d1-8fbb-478886f84964-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:18 crc kubenswrapper[4728]: I0227 10:29:18.162734 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-5lw67_7319e158-317b-4f98-b9da-0481f2c0aca8/cluster-samples-operator/0.log" Feb 27 10:29:18 crc kubenswrapper[4728]: I0227 10:29:18.162808 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5lw67" event={"ID":"7319e158-317b-4f98-b9da-0481f2c0aca8","Type":"ContainerStarted","Data":"efbe6396ce37eccf34bfe283cf3ee6f138a48bba257883a278b0972ea66d6e3c"} Feb 27 10:29:18 crc kubenswrapper[4728]: I0227 10:29:18.166903 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bbab99ad-e0f9-47d1-8fbb-478886f84964","Type":"ContainerDied","Data":"3cdd6c704b45941e14f4e3911fc665b061c179e6f8b7a8b8ae27ad06435ee14e"} Feb 27 10:29:18 crc kubenswrapper[4728]: I0227 10:29:18.167239 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cdd6c704b45941e14f4e3911fc665b061c179e6f8b7a8b8ae27ad06435ee14e" Feb 27 10:29:18 crc kubenswrapper[4728]: I0227 10:29:18.167081 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 10:29:22 crc kubenswrapper[4728]: I0227 10:29:22.368805 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs\") pod \"network-metrics-daemon-wv4rk\" (UID: \"861d0263-093a-4dfa-93d7-d3efb29da94b\") " pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:29:22 crc kubenswrapper[4728]: I0227 10:29:22.370431 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 27 10:29:22 crc kubenswrapper[4728]: I0227 10:29:22.393564 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/861d0263-093a-4dfa-93d7-d3efb29da94b-metrics-certs\") pod \"network-metrics-daemon-wv4rk\" (UID: \"861d0263-093a-4dfa-93d7-d3efb29da94b\") " pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:29:22 crc kubenswrapper[4728]: I0227 10:29:22.564206 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 27 10:29:22 crc kubenswrapper[4728]: I0227 10:29:22.572342 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wv4rk" Feb 27 10:29:23 crc kubenswrapper[4728]: I0227 10:29:23.039316 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:23 crc kubenswrapper[4728]: I0227 10:29:23.045128 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:29:23 crc kubenswrapper[4728]: I0227 10:29:23.696822 4728 patch_prober.go:28] interesting pod/downloads-7954f5f757-c46ql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 27 10:29:23 crc kubenswrapper[4728]: I0227 10:29:23.696891 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-c46ql" podUID="a3656135-373e-4ec6-9cf1-e34d6a95c5a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 27 10:29:23 crc kubenswrapper[4728]: I0227 10:29:23.696946 4728 patch_prober.go:28] interesting pod/downloads-7954f5f757-c46ql container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 27 10:29:23 crc kubenswrapper[4728]: I0227 10:29:23.697009 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-c46ql" podUID="a3656135-373e-4ec6-9cf1-e34d6a95c5a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 27 10:29:24 crc kubenswrapper[4728]: E0227 10:29:24.009793 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown 
desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 10:29:24 crc kubenswrapper[4728]: E0227 10:29:24.010902 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 10:29:24 crc kubenswrapper[4728]: E0227 10:29:24.012448 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 10:29:24 crc kubenswrapper[4728]: E0227 10:29:24.012525 4728 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" podUID="a4d6106d-dd3c-4a6a-aaeb-b441add3fdad" containerName="kube-multus-additional-cni-plugins" Feb 27 10:29:28 crc kubenswrapper[4728]: I0227 10:29:28.404289 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76b589b978-x8q75"] Feb 27 10:29:28 crc kubenswrapper[4728]: I0227 10:29:28.415819 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" podUID="0386cbf8-9da3-4ca2-86c0-5e38713e390f" containerName="controller-manager" containerID="cri-o://856c2caee3c7918df12836b0c71fac4399d6c6b7930b4372ff04db9a16a47c4d" gracePeriod=30 
Feb 27 10:29:28 crc kubenswrapper[4728]: I0227 10:29:28.437319 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz"] Feb 27 10:29:28 crc kubenswrapper[4728]: I0227 10:29:28.438105 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" podUID="98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9" containerName="route-controller-manager" containerID="cri-o://2e72df5008f9f47bbba1166fde91008ed8b175a17cf53bc293ce423868fae854" gracePeriod=30 Feb 27 10:29:30 crc kubenswrapper[4728]: I0227 10:29:30.305014 4728 generic.go:334] "Generic (PLEG): container finished" podID="98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9" containerID="2e72df5008f9f47bbba1166fde91008ed8b175a17cf53bc293ce423868fae854" exitCode=0 Feb 27 10:29:30 crc kubenswrapper[4728]: I0227 10:29:30.305089 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" event={"ID":"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9","Type":"ContainerDied","Data":"2e72df5008f9f47bbba1166fde91008ed8b175a17cf53bc293ce423868fae854"} Feb 27 10:29:31 crc kubenswrapper[4728]: I0227 10:29:31.181056 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:29:31 crc kubenswrapper[4728]: I0227 10:29:31.261335 4728 patch_prober.go:28] interesting pod/controller-manager-76b589b978-x8q75 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body= Feb 27 10:29:31 crc kubenswrapper[4728]: I0227 10:29:31.261758 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" 
podUID="0386cbf8-9da3-4ca2-86c0-5e38713e390f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" Feb 27 10:29:32 crc kubenswrapper[4728]: I0227 10:29:32.321903 4728 generic.go:334] "Generic (PLEG): container finished" podID="0386cbf8-9da3-4ca2-86c0-5e38713e390f" containerID="856c2caee3c7918df12836b0c71fac4399d6c6b7930b4372ff04db9a16a47c4d" exitCode=0 Feb 27 10:29:32 crc kubenswrapper[4728]: I0227 10:29:32.322009 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" event={"ID":"0386cbf8-9da3-4ca2-86c0-5e38713e390f","Type":"ContainerDied","Data":"856c2caee3c7918df12836b0c71fac4399d6c6b7930b4372ff04db9a16a47c4d"} Feb 27 10:29:33 crc kubenswrapper[4728]: I0227 10:29:33.709852 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-c46ql" Feb 27 10:29:33 crc kubenswrapper[4728]: I0227 10:29:33.841634 4728 patch_prober.go:28] interesting pod/route-controller-manager-6b9d5c76fc-vqvnz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Feb 27 10:29:33 crc kubenswrapper[4728]: I0227 10:29:33.841699 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" podUID="98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Feb 27 10:29:34 crc kubenswrapper[4728]: E0227 10:29:34.009717 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 10:29:34 crc kubenswrapper[4728]: E0227 10:29:34.011215 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 10:29:34 crc kubenswrapper[4728]: E0227 10:29:34.012549 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 10:29:34 crc kubenswrapper[4728]: E0227 10:29:34.012580 4728 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" podUID="a4d6106d-dd3c-4a6a-aaeb-b441add3fdad" containerName="kube-multus-additional-cni-plugins" Feb 27 10:29:36 crc kubenswrapper[4728]: I0227 10:29:36.763103 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 10:29:38 crc kubenswrapper[4728]: I0227 10:29:38.129729 4728 ???:1] "http: TLS handshake error from 192.168.126.11:46880: no serving certificate available for the kubelet" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.374596 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-w976v_a4d6106d-dd3c-4a6a-aaeb-b441add3fdad/kube-multus-additional-cni-plugins/0.log" Feb 
27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.375063 4728 generic.go:334] "Generic (PLEG): container finished" podID="a4d6106d-dd3c-4a6a-aaeb-b441add3fdad" containerID="1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf" exitCode=137 Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.375091 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" event={"ID":"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad","Type":"ContainerDied","Data":"1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf"} Feb 27 10:29:40 crc kubenswrapper[4728]: E0227 10:29:40.431372 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 10:29:40 crc kubenswrapper[4728]: E0227 10:29:40.431532 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:29:40 crc kubenswrapper[4728]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 10:29:40 crc kubenswrapper[4728]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-762bl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536468-682zs_openshift-infra(826461a8-eef9-4a1f-b4a7-4ff8076ec729): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Feb 27 10:29:40 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:29:40 crc kubenswrapper[4728]: E0227 10:29:40.432873 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29536468-682zs" podUID="826461a8-eef9-4a1f-b4a7-4ff8076ec729" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.466522 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.471589 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.493330 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f"] Feb 27 10:29:40 crc kubenswrapper[4728]: E0227 10:29:40.493609 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f82e7468-152d-46a6-9012-3bb0b4219b3f" containerName="collect-profiles" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.493633 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f82e7468-152d-46a6-9012-3bb0b4219b3f" containerName="collect-profiles" Feb 27 10:29:40 crc kubenswrapper[4728]: E0227 10:29:40.493648 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9" containerName="route-controller-manager" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.493658 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9" containerName="route-controller-manager" Feb 27 10:29:40 crc kubenswrapper[4728]: E0227 10:29:40.493670 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16d774a-941b-4d0c-b6d7-5c13a7e88153" containerName="pruner" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.493677 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16d774a-941b-4d0c-b6d7-5c13a7e88153" containerName="pruner" Feb 27 10:29:40 crc kubenswrapper[4728]: E0227 10:29:40.493686 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbab99ad-e0f9-47d1-8fbb-478886f84964" containerName="pruner" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.493693 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbab99ad-e0f9-47d1-8fbb-478886f84964" containerName="pruner" Feb 27 10:29:40 crc kubenswrapper[4728]: E0227 10:29:40.493712 4728 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0386cbf8-9da3-4ca2-86c0-5e38713e390f" containerName="controller-manager" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.493720 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0386cbf8-9da3-4ca2-86c0-5e38713e390f" containerName="controller-manager" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.493861 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0386cbf8-9da3-4ca2-86c0-5e38713e390f" containerName="controller-manager" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.493878 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="d16d774a-941b-4d0c-b6d7-5c13a7e88153" containerName="pruner" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.493890 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f82e7468-152d-46a6-9012-3bb0b4219b3f" containerName="collect-profiles" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.493901 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbab99ad-e0f9-47d1-8fbb-478886f84964" containerName="pruner" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.493914 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9" containerName="route-controller-manager" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.494429 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.499760 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f"] Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.575040 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-629xs\" (UniqueName: \"kubernetes.io/projected/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-kube-api-access-629xs\") pod \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\" (UID: \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\") " Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.575090 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-config\") pod \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\" (UID: \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\") " Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.575125 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtqc2\" (UniqueName: \"kubernetes.io/projected/0386cbf8-9da3-4ca2-86c0-5e38713e390f-kube-api-access-jtqc2\") pod \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.575162 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0386cbf8-9da3-4ca2-86c0-5e38713e390f-config\") pod \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.575187 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-client-ca\") pod 
\"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\" (UID: \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\") " Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.575206 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0386cbf8-9da3-4ca2-86c0-5e38713e390f-proxy-ca-bundles\") pod \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.575237 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0386cbf8-9da3-4ca2-86c0-5e38713e390f-serving-cert\") pod \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.575260 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0386cbf8-9da3-4ca2-86c0-5e38713e390f-client-ca\") pod \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\" (UID: \"0386cbf8-9da3-4ca2-86c0-5e38713e390f\") " Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.575309 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-serving-cert\") pod \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\" (UID: \"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9\") " Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.576842 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0386cbf8-9da3-4ca2-86c0-5e38713e390f-client-ca" (OuterVolumeSpecName: "client-ca") pod "0386cbf8-9da3-4ca2-86c0-5e38713e390f" (UID: "0386cbf8-9da3-4ca2-86c0-5e38713e390f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.576887 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0386cbf8-9da3-4ca2-86c0-5e38713e390f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0386cbf8-9da3-4ca2-86c0-5e38713e390f" (UID: "0386cbf8-9da3-4ca2-86c0-5e38713e390f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.576913 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0386cbf8-9da3-4ca2-86c0-5e38713e390f-config" (OuterVolumeSpecName: "config") pod "0386cbf8-9da3-4ca2-86c0-5e38713e390f" (UID: "0386cbf8-9da3-4ca2-86c0-5e38713e390f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.576982 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-config" (OuterVolumeSpecName: "config") pod "98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9" (UID: "98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.576995 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-client-ca" (OuterVolumeSpecName: "client-ca") pod "98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9" (UID: "98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.580728 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9" (UID: "98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.580920 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0386cbf8-9da3-4ca2-86c0-5e38713e390f-kube-api-access-jtqc2" (OuterVolumeSpecName: "kube-api-access-jtqc2") pod "0386cbf8-9da3-4ca2-86c0-5e38713e390f" (UID: "0386cbf8-9da3-4ca2-86c0-5e38713e390f"). InnerVolumeSpecName "kube-api-access-jtqc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.580980 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0386cbf8-9da3-4ca2-86c0-5e38713e390f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0386cbf8-9da3-4ca2-86c0-5e38713e390f" (UID: "0386cbf8-9da3-4ca2-86c0-5e38713e390f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.581138 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-kube-api-access-629xs" (OuterVolumeSpecName: "kube-api-access-629xs") pod "98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9" (UID: "98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9"). InnerVolumeSpecName "kube-api-access-629xs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.676736 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67ff3cc5-f037-4e18-83e1-aa1b470fd115-serving-cert\") pod \"route-controller-manager-7bfb7d7db9-52m6f\" (UID: \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\") " pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.676779 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qktfs\" (UniqueName: \"kubernetes.io/projected/67ff3cc5-f037-4e18-83e1-aa1b470fd115-kube-api-access-qktfs\") pod \"route-controller-manager-7bfb7d7db9-52m6f\" (UID: \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\") " pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.676815 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67ff3cc5-f037-4e18-83e1-aa1b470fd115-config\") pod \"route-controller-manager-7bfb7d7db9-52m6f\" (UID: \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\") " pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.677043 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67ff3cc5-f037-4e18-83e1-aa1b470fd115-client-ca\") pod \"route-controller-manager-7bfb7d7db9-52m6f\" (UID: \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\") " pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.677162 4728 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.677176 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtqc2\" (UniqueName: \"kubernetes.io/projected/0386cbf8-9da3-4ca2-86c0-5e38713e390f-kube-api-access-jtqc2\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.677185 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0386cbf8-9da3-4ca2-86c0-5e38713e390f-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.677193 4728 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.677203 4728 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0386cbf8-9da3-4ca2-86c0-5e38713e390f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.677212 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0386cbf8-9da3-4ca2-86c0-5e38713e390f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.677220 4728 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0386cbf8-9da3-4ca2-86c0-5e38713e390f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.677229 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:40 crc 
kubenswrapper[4728]: I0227 10:29:40.677237 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-629xs\" (UniqueName: \"kubernetes.io/projected/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9-kube-api-access-629xs\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.778223 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67ff3cc5-f037-4e18-83e1-aa1b470fd115-serving-cert\") pod \"route-controller-manager-7bfb7d7db9-52m6f\" (UID: \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\") " pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.778299 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qktfs\" (UniqueName: \"kubernetes.io/projected/67ff3cc5-f037-4e18-83e1-aa1b470fd115-kube-api-access-qktfs\") pod \"route-controller-manager-7bfb7d7db9-52m6f\" (UID: \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\") " pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.778365 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67ff3cc5-f037-4e18-83e1-aa1b470fd115-config\") pod \"route-controller-manager-7bfb7d7db9-52m6f\" (UID: \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\") " pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.778467 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67ff3cc5-f037-4e18-83e1-aa1b470fd115-client-ca\") pod \"route-controller-manager-7bfb7d7db9-52m6f\" (UID: \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\") " 
pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.780144 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67ff3cc5-f037-4e18-83e1-aa1b470fd115-client-ca\") pod \"route-controller-manager-7bfb7d7db9-52m6f\" (UID: \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\") " pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.781864 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67ff3cc5-f037-4e18-83e1-aa1b470fd115-serving-cert\") pod \"route-controller-manager-7bfb7d7db9-52m6f\" (UID: \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\") " pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.782271 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67ff3cc5-f037-4e18-83e1-aa1b470fd115-config\") pod \"route-controller-manager-7bfb7d7db9-52m6f\" (UID: \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\") " pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.793883 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qktfs\" (UniqueName: \"kubernetes.io/projected/67ff3cc5-f037-4e18-83e1-aa1b470fd115-kube-api-access-qktfs\") pod \"route-controller-manager-7bfb7d7db9-52m6f\" (UID: \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\") " pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" Feb 27 10:29:40 crc kubenswrapper[4728]: I0227 10:29:40.832319 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" Feb 27 10:29:41 crc kubenswrapper[4728]: I0227 10:29:41.380988 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" event={"ID":"0386cbf8-9da3-4ca2-86c0-5e38713e390f","Type":"ContainerDied","Data":"5c2a760395f5d741d41b07674c582ae4076faae61765ff79369cf475e466d0bd"} Feb 27 10:29:41 crc kubenswrapper[4728]: I0227 10:29:41.381043 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76b589b978-x8q75" Feb 27 10:29:41 crc kubenswrapper[4728]: I0227 10:29:41.381361 4728 scope.go:117] "RemoveContainer" containerID="856c2caee3c7918df12836b0c71fac4399d6c6b7930b4372ff04db9a16a47c4d" Feb 27 10:29:41 crc kubenswrapper[4728]: I0227 10:29:41.383354 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" Feb 27 10:29:41 crc kubenswrapper[4728]: I0227 10:29:41.384210 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz" event={"ID":"98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9","Type":"ContainerDied","Data":"f8260ac7e1ff95f5a37cdada4d8ead833da38b447a163c69f87560429991075a"} Feb 27 10:29:41 crc kubenswrapper[4728]: E0227 10:29:41.384941 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536468-682zs" podUID="826461a8-eef9-4a1f-b4a7-4ff8076ec729" Feb 27 10:29:41 crc kubenswrapper[4728]: I0227 10:29:41.413640 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz"] Feb 27 
10:29:41 crc kubenswrapper[4728]: I0227 10:29:41.416726 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b9d5c76fc-vqvnz"] Feb 27 10:29:41 crc kubenswrapper[4728]: I0227 10:29:41.422845 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76b589b978-x8q75"] Feb 27 10:29:41 crc kubenswrapper[4728]: I0227 10:29:41.425204 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-76b589b978-x8q75"] Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.512790 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn"] Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.514076 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.515700 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.515781 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.516050 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.517450 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.517690 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.518036 4728 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.523584 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.523967 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn"] Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.602415 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2479aae-7e42-4324-8b06-d0e005e03acc-config\") pod \"controller-manager-85dd7f8bd5-l9xwn\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.602855 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh68g\" (UniqueName: \"kubernetes.io/projected/e2479aae-7e42-4324-8b06-d0e005e03acc-kube-api-access-wh68g\") pod \"controller-manager-85dd7f8bd5-l9xwn\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.602920 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2479aae-7e42-4324-8b06-d0e005e03acc-serving-cert\") pod \"controller-manager-85dd7f8bd5-l9xwn\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.602977 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/e2479aae-7e42-4324-8b06-d0e005e03acc-proxy-ca-bundles\") pod \"controller-manager-85dd7f8bd5-l9xwn\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.603015 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2479aae-7e42-4324-8b06-d0e005e03acc-client-ca\") pod \"controller-manager-85dd7f8bd5-l9xwn\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.704846 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2479aae-7e42-4324-8b06-d0e005e03acc-serving-cert\") pod \"controller-manager-85dd7f8bd5-l9xwn\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.704908 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e2479aae-7e42-4324-8b06-d0e005e03acc-proxy-ca-bundles\") pod \"controller-manager-85dd7f8bd5-l9xwn\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.704935 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2479aae-7e42-4324-8b06-d0e005e03acc-client-ca\") pod \"controller-manager-85dd7f8bd5-l9xwn\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.705014 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2479aae-7e42-4324-8b06-d0e005e03acc-config\") pod \"controller-manager-85dd7f8bd5-l9xwn\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.705041 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh68g\" (UniqueName: \"kubernetes.io/projected/e2479aae-7e42-4324-8b06-d0e005e03acc-kube-api-access-wh68g\") pod \"controller-manager-85dd7f8bd5-l9xwn\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.707068 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2479aae-7e42-4324-8b06-d0e005e03acc-config\") pod \"controller-manager-85dd7f8bd5-l9xwn\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.707617 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2479aae-7e42-4324-8b06-d0e005e03acc-client-ca\") pod \"controller-manager-85dd7f8bd5-l9xwn\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.709672 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e2479aae-7e42-4324-8b06-d0e005e03acc-proxy-ca-bundles\") pod \"controller-manager-85dd7f8bd5-l9xwn\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 
27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.724080 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh68g\" (UniqueName: \"kubernetes.io/projected/e2479aae-7e42-4324-8b06-d0e005e03acc-kube-api-access-wh68g\") pod \"controller-manager-85dd7f8bd5-l9xwn\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.724355 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2479aae-7e42-4324-8b06-d0e005e03acc-serving-cert\") pod \"controller-manager-85dd7f8bd5-l9xwn\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.734593 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0386cbf8-9da3-4ca2-86c0-5e38713e390f" path="/var/lib/kubelet/pods/0386cbf8-9da3-4ca2-86c0-5e38713e390f/volumes" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.735896 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9" path="/var/lib/kubelet/pods/98ac1ff8-14b4-40f9-9d69-87c2fbab5aa9/volumes" Feb 27 10:29:42 crc kubenswrapper[4728]: I0227 10:29:42.839716 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:43 crc kubenswrapper[4728]: I0227 10:29:43.905335 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9bqx2" Feb 27 10:29:43 crc kubenswrapper[4728]: E0227 10:29:43.906198 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 10:29:43 crc kubenswrapper[4728]: E0227 10:29:43.907314 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpx97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,Wind
owsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pztrz_openshift-marketplace(7850a694-dd44-4f4d-9b97-ecaa50efb803): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 10:29:43 crc kubenswrapper[4728]: E0227 10:29:43.908528 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-pztrz" podUID="7850a694-dd44-4f4d-9b97-ecaa50efb803" Feb 27 10:29:44 crc kubenswrapper[4728]: E0227 10:29:44.008051 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf is running failed: container process not found" containerID="1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 10:29:44 crc kubenswrapper[4728]: E0227 10:29:44.008710 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf is running failed: container process not found" containerID="1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 10:29:44 crc kubenswrapper[4728]: E0227 10:29:44.009173 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc 
= container is not created or running: checking if PID of 1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf is running failed: container process not found" containerID="1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 27 10:29:44 crc kubenswrapper[4728]: E0227 10:29:44.009221 4728 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" podUID="a4d6106d-dd3c-4a6a-aaeb-b441add3fdad" containerName="kube-multus-additional-cni-plugins" Feb 27 10:29:45 crc kubenswrapper[4728]: E0227 10:29:45.609414 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pztrz" podUID="7850a694-dd44-4f4d-9b97-ecaa50efb803" Feb 27 10:29:45 crc kubenswrapper[4728]: E0227 10:29:45.673398 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 10:29:45 crc kubenswrapper[4728]: E0227 10:29:45.673768 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ml2lk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gg7mm_openshift-marketplace(0a7d9e95-6291-465f-9f94-f99fc86e4389): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 10:29:45 crc kubenswrapper[4728]: E0227 10:29:45.675616 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gg7mm" podUID="0a7d9e95-6291-465f-9f94-f99fc86e4389" Feb 27 10:29:45 crc 
kubenswrapper[4728]: I0227 10:29:45.685906 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 27 10:29:45 crc kubenswrapper[4728]: I0227 10:29:45.686650 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 10:29:45 crc kubenswrapper[4728]: I0227 10:29:45.688643 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 27 10:29:45 crc kubenswrapper[4728]: I0227 10:29:45.688757 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 27 10:29:45 crc kubenswrapper[4728]: I0227 10:29:45.693776 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 27 10:29:45 crc kubenswrapper[4728]: I0227 10:29:45.850688 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 10:29:45 crc kubenswrapper[4728]: I0227 10:29:45.850734 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 10:29:45 crc kubenswrapper[4728]: I0227 10:29:45.957112 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 10:29:45 crc kubenswrapper[4728]: I0227 10:29:45.957205 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 10:29:45 crc kubenswrapper[4728]: I0227 10:29:45.957480 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 10:29:45 crc kubenswrapper[4728]: I0227 10:29:45.991258 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 10:29:46 crc kubenswrapper[4728]: I0227 10:29:46.006183 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 10:29:47 crc kubenswrapper[4728]: E0227 10:29:47.219158 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gg7mm" podUID="0a7d9e95-6291-465f-9f94-f99fc86e4389" Feb 27 10:29:47 crc kubenswrapper[4728]: E0227 10:29:47.295283 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 10:29:47 crc kubenswrapper[4728]: E0227 10:29:47.295427 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dvlqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-brtfb_openshift-marketplace(7c5a3750-282d-4f84-a9c9-b3167aa283b8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 10:29:47 crc kubenswrapper[4728]: E0227 10:29:47.296824 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-brtfb" podUID="7c5a3750-282d-4f84-a9c9-b3167aa283b8" Feb 27 10:29:47 crc 
kubenswrapper[4728]: E0227 10:29:47.336134 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 10:29:47 crc kubenswrapper[4728]: E0227 10:29:47.336283 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmk8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-t4bnk_openshift-marketplace(7771abc7-886d-41eb-b966-74538062511f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 10:29:47 crc kubenswrapper[4728]: E0227 10:29:47.337457 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-t4bnk" podUID="7771abc7-886d-41eb-b966-74538062511f" Feb 27 10:29:47 crc kubenswrapper[4728]: E0227 10:29:47.366482 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 10:29:47 crc kubenswrapper[4728]: E0227 10:29:47.366619 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rt98j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-xsskq_openshift-marketplace(49693a3e-1583-4584-9049-fe85013bb9ab): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 10:29:47 crc kubenswrapper[4728]: E0227 10:29:47.367808 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-xsskq" podUID="49693a3e-1583-4584-9049-fe85013bb9ab" Feb 27 10:29:48 crc 
kubenswrapper[4728]: I0227 10:29:48.386612 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn"] Feb 27 10:29:48 crc kubenswrapper[4728]: I0227 10:29:48.486492 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f"] Feb 27 10:29:48 crc kubenswrapper[4728]: E0227 10:29:48.629244 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-t4bnk" podUID="7771abc7-886d-41eb-b966-74538062511f" Feb 27 10:29:48 crc kubenswrapper[4728]: E0227 10:29:48.629334 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-xsskq" podUID="49693a3e-1583-4584-9049-fe85013bb9ab" Feb 27 10:29:48 crc kubenswrapper[4728]: E0227 10:29:48.629412 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-brtfb" podUID="7c5a3750-282d-4f84-a9c9-b3167aa283b8" Feb 27 10:29:48 crc kubenswrapper[4728]: I0227 10:29:48.658674 4728 scope.go:117] "RemoveContainer" containerID="2e72df5008f9f47bbba1166fde91008ed8b175a17cf53bc293ce423868fae854" Feb 27 10:29:48 crc kubenswrapper[4728]: E0227 10:29:48.689910 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 10:29:48 crc kubenswrapper[4728]: E0227 10:29:48.690433 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6qpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wnfnp_openshift-marketplace(b27f5cf8-de13-42a0-825a-0bc27ddc8466): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 
10:29:48 crc kubenswrapper[4728]: E0227 10:29:48.691656 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wnfnp" podUID="b27f5cf8-de13-42a0-825a-0bc27ddc8466" Feb 27 10:29:48 crc kubenswrapper[4728]: E0227 10:29:48.702349 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 10:29:48 crc kubenswrapper[4728]: E0227 10:29:48.702729 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nd859,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rq2kj_openshift-marketplace(055d41f1-5e49-481e-8662-a245ba878526): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 10:29:48 crc kubenswrapper[4728]: E0227 10:29:48.704581 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rq2kj" podUID="055d41f1-5e49-481e-8662-a245ba878526" Feb 27 10:29:48 crc 
kubenswrapper[4728]: E0227 10:29:48.718290 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 10:29:48 crc kubenswrapper[4728]: E0227 10:29:48.718495 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-npgnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-b97gn_openshift-marketplace(34088c2f-1e95-4227-9242-9e4cde7a9fde): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 10:29:48 crc kubenswrapper[4728]: E0227 10:29:48.720307 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-b97gn" podUID="34088c2f-1e95-4227-9242-9e4cde7a9fde" Feb 27 10:29:48 crc kubenswrapper[4728]: I0227 10:29:48.725968 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-w976v_a4d6106d-dd3c-4a6a-aaeb-b441add3fdad/kube-multus-additional-cni-plugins/0.log" Feb 27 10:29:48 crc kubenswrapper[4728]: I0227 10:29:48.726177 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" Feb 27 10:29:48 crc kubenswrapper[4728]: I0227 10:29:48.798250 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-tuning-conf-dir\") pod \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\" (UID: \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\") " Feb 27 10:29:48 crc kubenswrapper[4728]: I0227 10:29:48.798300 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-ready\") pod \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\" (UID: \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\") " Feb 27 10:29:48 crc kubenswrapper[4728]: I0227 10:29:48.798414 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-cni-sysctl-allowlist\") pod \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\" (UID: \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\") " Feb 27 10:29:48 crc kubenswrapper[4728]: I0227 10:29:48.798463 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpkm9\" (UniqueName: \"kubernetes.io/projected/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-kube-api-access-kpkm9\") pod \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\" (UID: \"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad\") " Feb 27 10:29:48 crc kubenswrapper[4728]: I0227 10:29:48.799128 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "a4d6106d-dd3c-4a6a-aaeb-b441add3fdad" (UID: "a4d6106d-dd3c-4a6a-aaeb-b441add3fdad"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:29:48 crc kubenswrapper[4728]: I0227 10:29:48.799420 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-ready" (OuterVolumeSpecName: "ready") pod "a4d6106d-dd3c-4a6a-aaeb-b441add3fdad" (UID: "a4d6106d-dd3c-4a6a-aaeb-b441add3fdad"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:29:48 crc kubenswrapper[4728]: I0227 10:29:48.799678 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "a4d6106d-dd3c-4a6a-aaeb-b441add3fdad" (UID: "a4d6106d-dd3c-4a6a-aaeb-b441add3fdad"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:29:48 crc kubenswrapper[4728]: I0227 10:29:48.803948 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-kube-api-access-kpkm9" (OuterVolumeSpecName: "kube-api-access-kpkm9") pod "a4d6106d-dd3c-4a6a-aaeb-b441add3fdad" (UID: "a4d6106d-dd3c-4a6a-aaeb-b441add3fdad"). InnerVolumeSpecName "kube-api-access-kpkm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:29:48 crc kubenswrapper[4728]: I0227 10:29:48.899632 4728 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:48 crc kubenswrapper[4728]: I0227 10:29:48.899662 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpkm9\" (UniqueName: \"kubernetes.io/projected/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-kube-api-access-kpkm9\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:48 crc kubenswrapper[4728]: I0227 10:29:48.899671 4728 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:48 crc kubenswrapper[4728]: I0227 10:29:48.899681 4728 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad-ready\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.041601 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wv4rk"] Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.099940 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f"] Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.104094 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 27 10:29:49 crc kubenswrapper[4728]: W0227 10:29:49.107353 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod02fffb4d_70a0_44df_9c12_c9c9ae4a2c0e.slice/crio-c6ca13c4fca1b2bc48fe9ad777eba2ccf221ea0b2f7dfa9d532ad16ad6fd02f5 WatchSource:0}: Error finding container 
c6ca13c4fca1b2bc48fe9ad777eba2ccf221ea0b2f7dfa9d532ad16ad6fd02f5: Status 404 returned error can't find the container with id c6ca13c4fca1b2bc48fe9ad777eba2ccf221ea0b2f7dfa9d532ad16ad6fd02f5 Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.108300 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn"] Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.432730 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wv4rk" event={"ID":"861d0263-093a-4dfa-93d7-d3efb29da94b","Type":"ContainerStarted","Data":"69a819d5da9afeea4dd9c6328893de63b749416fe9427114939d910c69e3e41e"} Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.435554 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wv4rk" event={"ID":"861d0263-093a-4dfa-93d7-d3efb29da94b","Type":"ContainerStarted","Data":"1424d47affc8693628fadf2bdc4ed4c4d0934b3d66402bce3640b4a59031d7c9"} Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.438189 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e","Type":"ContainerStarted","Data":"c6ca13c4fca1b2bc48fe9ad777eba2ccf221ea0b2f7dfa9d532ad16ad6fd02f5"} Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.440483 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" event={"ID":"67ff3cc5-f037-4e18-83e1-aa1b470fd115","Type":"ContainerStarted","Data":"181bf611fbdccf55786b0a2554a098347f29957fd1ea22b2c2e8d690386f1722"} Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.440833 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" 
event={"ID":"67ff3cc5-f037-4e18-83e1-aa1b470fd115","Type":"ContainerStarted","Data":"2a35dd9699d05ae06220f4aee93829a5f282e8b786984f9b195e18aff5fbce7a"} Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.440961 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" podUID="67ff3cc5-f037-4e18-83e1-aa1b470fd115" containerName="route-controller-manager" containerID="cri-o://181bf611fbdccf55786b0a2554a098347f29957fd1ea22b2c2e8d690386f1722" gracePeriod=30 Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.442440 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.448342 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-w976v_a4d6106d-dd3c-4a6a-aaeb-b441add3fdad/kube-multus-additional-cni-plugins/0.log" Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.448432 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.448929 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-w976v" event={"ID":"a4d6106d-dd3c-4a6a-aaeb-b441add3fdad","Type":"ContainerDied","Data":"0cf3e969d7eea28f8ac45a8700df99275ff52e402615c51d8b23b8728d46e473"} Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.448973 4728 scope.go:117] "RemoveContainer" containerID="1206c848258532f67f69f951cef6e00d801573f63a796e06bc1603e25228dccf" Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.464302 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" event={"ID":"e2479aae-7e42-4324-8b06-d0e005e03acc","Type":"ContainerStarted","Data":"04b81405b2be06b12ec785df1d771871a0efe12b34d441982d91e95eeef0dd53"} Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.464349 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" event={"ID":"e2479aae-7e42-4324-8b06-d0e005e03acc","Type":"ContainerStarted","Data":"8a091ec305b8abb9127d926e454b540f972872cc6bc4a62096e042c88a1f8dc1"} Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.464444 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" podUID="e2479aae-7e42-4324-8b06-d0e005e03acc" containerName="controller-manager" containerID="cri-o://04b81405b2be06b12ec785df1d771871a0efe12b34d441982d91e95eeef0dd53" gracePeriod=30 Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.464989 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:49 crc kubenswrapper[4728]: E0227 10:29:49.480120 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wnfnp" podUID="b27f5cf8-de13-42a0-825a-0bc27ddc8466" Feb 27 10:29:49 crc kubenswrapper[4728]: E0227 10:29:49.486118 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-b97gn" podUID="34088c2f-1e95-4227-9242-9e4cde7a9fde" Feb 27 10:29:49 crc kubenswrapper[4728]: E0227 10:29:49.486306 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rq2kj" podUID="055d41f1-5e49-481e-8662-a245ba878526" Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.494720 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.508067 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" podStartSLOduration=21.508052149 podStartE2EDuration="21.508052149s" podCreationTimestamp="2026-02-27 10:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:49.463881667 +0000 UTC m=+209.426247773" watchObservedRunningTime="2026-02-27 10:29:49.508052149 +0000 UTC m=+209.470418255" Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.548539 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" podStartSLOduration=21.548523944 podStartE2EDuration="21.548523944s" podCreationTimestamp="2026-02-27 10:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:49.546488826 +0000 UTC m=+209.508854932" watchObservedRunningTime="2026-02-27 10:29:49.548523944 +0000 UTC m=+209.510890050" Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.569724 4728 patch_prober.go:28] interesting pod/route-controller-manager-7bfb7d7db9-52m6f container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": read tcp 10.217.0.2:35426->10.217.0.57:8443: read: connection reset by peer" start-of-body= Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.569782 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" podUID="67ff3cc5-f037-4e18-83e1-aa1b470fd115" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": read tcp 10.217.0.2:35426->10.217.0.57:8443: read: connection reset by peer" Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.606572 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-w976v"] Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.611555 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-w976v"] Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.829182 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.877951 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7bfb7d7db9-52m6f_67ff3cc5-f037-4e18-83e1-aa1b470fd115/route-controller-manager/0.log" Feb 27 10:29:49 crc kubenswrapper[4728]: I0227 10:29:49.878013 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.013712 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2479aae-7e42-4324-8b06-d0e005e03acc-client-ca\") pod \"e2479aae-7e42-4324-8b06-d0e005e03acc\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.013938 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67ff3cc5-f037-4e18-83e1-aa1b470fd115-serving-cert\") pod \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\" (UID: \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\") " Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.014002 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh68g\" (UniqueName: \"kubernetes.io/projected/e2479aae-7e42-4324-8b06-d0e005e03acc-kube-api-access-wh68g\") pod \"e2479aae-7e42-4324-8b06-d0e005e03acc\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.014035 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e2479aae-7e42-4324-8b06-d0e005e03acc-proxy-ca-bundles\") pod \"e2479aae-7e42-4324-8b06-d0e005e03acc\" (UID: 
\"e2479aae-7e42-4324-8b06-d0e005e03acc\") " Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.014065 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67ff3cc5-f037-4e18-83e1-aa1b470fd115-config\") pod \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\" (UID: \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\") " Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.014098 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67ff3cc5-f037-4e18-83e1-aa1b470fd115-client-ca\") pod \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\" (UID: \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\") " Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.014138 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qktfs\" (UniqueName: \"kubernetes.io/projected/67ff3cc5-f037-4e18-83e1-aa1b470fd115-kube-api-access-qktfs\") pod \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\" (UID: \"67ff3cc5-f037-4e18-83e1-aa1b470fd115\") " Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.014154 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2479aae-7e42-4324-8b06-d0e005e03acc-config\") pod \"e2479aae-7e42-4324-8b06-d0e005e03acc\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.014207 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2479aae-7e42-4324-8b06-d0e005e03acc-serving-cert\") pod \"e2479aae-7e42-4324-8b06-d0e005e03acc\" (UID: \"e2479aae-7e42-4324-8b06-d0e005e03acc\") " Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.014985 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67ff3cc5-f037-4e18-83e1-aa1b470fd115-client-ca" 
(OuterVolumeSpecName: "client-ca") pod "67ff3cc5-f037-4e18-83e1-aa1b470fd115" (UID: "67ff3cc5-f037-4e18-83e1-aa1b470fd115"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.015013 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67ff3cc5-f037-4e18-83e1-aa1b470fd115-config" (OuterVolumeSpecName: "config") pod "67ff3cc5-f037-4e18-83e1-aa1b470fd115" (UID: "67ff3cc5-f037-4e18-83e1-aa1b470fd115"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.015032 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2479aae-7e42-4324-8b06-d0e005e03acc-client-ca" (OuterVolumeSpecName: "client-ca") pod "e2479aae-7e42-4324-8b06-d0e005e03acc" (UID: "e2479aae-7e42-4324-8b06-d0e005e03acc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.015253 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2479aae-7e42-4324-8b06-d0e005e03acc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e2479aae-7e42-4324-8b06-d0e005e03acc" (UID: "e2479aae-7e42-4324-8b06-d0e005e03acc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.015662 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2479aae-7e42-4324-8b06-d0e005e03acc-config" (OuterVolumeSpecName: "config") pod "e2479aae-7e42-4324-8b06-d0e005e03acc" (UID: "e2479aae-7e42-4324-8b06-d0e005e03acc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.018356 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2479aae-7e42-4324-8b06-d0e005e03acc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e2479aae-7e42-4324-8b06-d0e005e03acc" (UID: "e2479aae-7e42-4324-8b06-d0e005e03acc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.018517 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ff3cc5-f037-4e18-83e1-aa1b470fd115-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "67ff3cc5-f037-4e18-83e1-aa1b470fd115" (UID: "67ff3cc5-f037-4e18-83e1-aa1b470fd115"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.018532 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2479aae-7e42-4324-8b06-d0e005e03acc-kube-api-access-wh68g" (OuterVolumeSpecName: "kube-api-access-wh68g") pod "e2479aae-7e42-4324-8b06-d0e005e03acc" (UID: "e2479aae-7e42-4324-8b06-d0e005e03acc"). InnerVolumeSpecName "kube-api-access-wh68g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.018553 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ff3cc5-f037-4e18-83e1-aa1b470fd115-kube-api-access-qktfs" (OuterVolumeSpecName: "kube-api-access-qktfs") pod "67ff3cc5-f037-4e18-83e1-aa1b470fd115" (UID: "67ff3cc5-f037-4e18-83e1-aa1b470fd115"). InnerVolumeSpecName "kube-api-access-qktfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.119400 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh68g\" (UniqueName: \"kubernetes.io/projected/e2479aae-7e42-4324-8b06-d0e005e03acc-kube-api-access-wh68g\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.119441 4728 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e2479aae-7e42-4324-8b06-d0e005e03acc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.119458 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67ff3cc5-f037-4e18-83e1-aa1b470fd115-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.119471 4728 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67ff3cc5-f037-4e18-83e1-aa1b470fd115-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.119484 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qktfs\" (UniqueName: \"kubernetes.io/projected/67ff3cc5-f037-4e18-83e1-aa1b470fd115-kube-api-access-qktfs\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.119495 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2479aae-7e42-4324-8b06-d0e005e03acc-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.119517 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2479aae-7e42-4324-8b06-d0e005e03acc-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.119525 4728 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2479aae-7e42-4324-8b06-d0e005e03acc-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.119533 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67ff3cc5-f037-4e18-83e1-aa1b470fd115-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.479371 4728 generic.go:334] "Generic (PLEG): container finished" podID="e2479aae-7e42-4324-8b06-d0e005e03acc" containerID="04b81405b2be06b12ec785df1d771871a0efe12b34d441982d91e95eeef0dd53" exitCode=0 Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.479557 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.480355 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" event={"ID":"e2479aae-7e42-4324-8b06-d0e005e03acc","Type":"ContainerDied","Data":"04b81405b2be06b12ec785df1d771871a0efe12b34d441982d91e95eeef0dd53"} Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.480407 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn" event={"ID":"e2479aae-7e42-4324-8b06-d0e005e03acc","Type":"ContainerDied","Data":"8a091ec305b8abb9127d926e454b540f972872cc6bc4a62096e042c88a1f8dc1"} Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.480455 4728 scope.go:117] "RemoveContainer" containerID="04b81405b2be06b12ec785df1d771871a0efe12b34d441982d91e95eeef0dd53" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.483099 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wv4rk" 
event={"ID":"861d0263-093a-4dfa-93d7-d3efb29da94b","Type":"ContainerStarted","Data":"c208f76d61b2206bfd910d00165a4b9ed244f9895b5831a66bd66d71c27652e4"} Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.485817 4728 generic.go:334] "Generic (PLEG): container finished" podID="02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e" containerID="5e3db8ed3548d04ca639a3afa2502f85d72926e3bbc7868ca1ebcafcdb941a5c" exitCode=0 Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.485884 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e","Type":"ContainerDied","Data":"5e3db8ed3548d04ca639a3afa2502f85d72926e3bbc7868ca1ebcafcdb941a5c"} Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.488252 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7bfb7d7db9-52m6f_67ff3cc5-f037-4e18-83e1-aa1b470fd115/route-controller-manager/0.log" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.488286 4728 generic.go:334] "Generic (PLEG): container finished" podID="67ff3cc5-f037-4e18-83e1-aa1b470fd115" containerID="181bf611fbdccf55786b0a2554a098347f29957fd1ea22b2c2e8d690386f1722" exitCode=255 Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.488330 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" event={"ID":"67ff3cc5-f037-4e18-83e1-aa1b470fd115","Type":"ContainerDied","Data":"181bf611fbdccf55786b0a2554a098347f29957fd1ea22b2c2e8d690386f1722"} Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.488347 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" event={"ID":"67ff3cc5-f037-4e18-83e1-aa1b470fd115","Type":"ContainerDied","Data":"2a35dd9699d05ae06220f4aee93829a5f282e8b786984f9b195e18aff5fbce7a"} Feb 27 10:29:50 crc kubenswrapper[4728]: 
I0227 10:29:50.488418 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.497921 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-wv4rk" podStartSLOduration=140.497910096 podStartE2EDuration="2m20.497910096s" podCreationTimestamp="2026-02-27 10:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:50.494136177 +0000 UTC m=+210.456502283" watchObservedRunningTime="2026-02-27 10:29:50.497910096 +0000 UTC m=+210.460276202" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.514843 4728 scope.go:117] "RemoveContainer" containerID="04b81405b2be06b12ec785df1d771871a0efe12b34d441982d91e95eeef0dd53" Feb 27 10:29:50 crc kubenswrapper[4728]: E0227 10:29:50.525467 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b81405b2be06b12ec785df1d771871a0efe12b34d441982d91e95eeef0dd53\": container with ID starting with 04b81405b2be06b12ec785df1d771871a0efe12b34d441982d91e95eeef0dd53 not found: ID does not exist" containerID="04b81405b2be06b12ec785df1d771871a0efe12b34d441982d91e95eeef0dd53" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.525524 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b81405b2be06b12ec785df1d771871a0efe12b34d441982d91e95eeef0dd53"} err="failed to get container status \"04b81405b2be06b12ec785df1d771871a0efe12b34d441982d91e95eeef0dd53\": rpc error: code = NotFound desc = could not find container \"04b81405b2be06b12ec785df1d771871a0efe12b34d441982d91e95eeef0dd53\": container with ID starting with 04b81405b2be06b12ec785df1d771871a0efe12b34d441982d91e95eeef0dd53 not found: ID does not 
exist" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.525557 4728 scope.go:117] "RemoveContainer" containerID="181bf611fbdccf55786b0a2554a098347f29957fd1ea22b2c2e8d690386f1722" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.530257 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn"] Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.535327 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-85dd7f8bd5-l9xwn"] Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.545411 4728 scope.go:117] "RemoveContainer" containerID="181bf611fbdccf55786b0a2554a098347f29957fd1ea22b2c2e8d690386f1722" Feb 27 10:29:50 crc kubenswrapper[4728]: E0227 10:29:50.545851 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"181bf611fbdccf55786b0a2554a098347f29957fd1ea22b2c2e8d690386f1722\": container with ID starting with 181bf611fbdccf55786b0a2554a098347f29957fd1ea22b2c2e8d690386f1722 not found: ID does not exist" containerID="181bf611fbdccf55786b0a2554a098347f29957fd1ea22b2c2e8d690386f1722" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.545888 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"181bf611fbdccf55786b0a2554a098347f29957fd1ea22b2c2e8d690386f1722"} err="failed to get container status \"181bf611fbdccf55786b0a2554a098347f29957fd1ea22b2c2e8d690386f1722\": rpc error: code = NotFound desc = could not find container \"181bf611fbdccf55786b0a2554a098347f29957fd1ea22b2c2e8d690386f1722\": container with ID starting with 181bf611fbdccf55786b0a2554a098347f29957fd1ea22b2c2e8d690386f1722 not found: ID does not exist" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.545901 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f"] Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.549214 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bfb7d7db9-52m6f"] Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.732121 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ff3cc5-f037-4e18-83e1-aa1b470fd115" path="/var/lib/kubelet/pods/67ff3cc5-f037-4e18-83e1-aa1b470fd115/volumes" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.732724 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4d6106d-dd3c-4a6a-aaeb-b441add3fdad" path="/var/lib/kubelet/pods/a4d6106d-dd3c-4a6a-aaeb-b441add3fdad/volumes" Feb 27 10:29:50 crc kubenswrapper[4728]: I0227 10:29:50.733296 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2479aae-7e42-4324-8b06-d0e005e03acc" path="/var/lib/kubelet/pods/e2479aae-7e42-4324-8b06-d0e005e03acc/volumes" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.537472 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v"] Feb 27 10:29:51 crc kubenswrapper[4728]: E0227 10:29:51.537965 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d6106d-dd3c-4a6a-aaeb-b441add3fdad" containerName="kube-multus-additional-cni-plugins" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.538003 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d6106d-dd3c-4a6a-aaeb-b441add3fdad" containerName="kube-multus-additional-cni-plugins" Feb 27 10:29:51 crc kubenswrapper[4728]: E0227 10:29:51.538027 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2479aae-7e42-4324-8b06-d0e005e03acc" containerName="controller-manager" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.538036 4728 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e2479aae-7e42-4324-8b06-d0e005e03acc" containerName="controller-manager" Feb 27 10:29:51 crc kubenswrapper[4728]: E0227 10:29:51.538065 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ff3cc5-f037-4e18-83e1-aa1b470fd115" containerName="route-controller-manager" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.538073 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ff3cc5-f037-4e18-83e1-aa1b470fd115" containerName="route-controller-manager" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.538347 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d6106d-dd3c-4a6a-aaeb-b441add3fdad" containerName="kube-multus-additional-cni-plugins" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.538371 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ff3cc5-f037-4e18-83e1-aa1b470fd115" containerName="route-controller-manager" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.538392 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2479aae-7e42-4324-8b06-d0e005e03acc" containerName="controller-manager" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.539021 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.544144 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.544168 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.544697 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.545454 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.546492 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.546759 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.564334 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-764d56c5d4-trvgv"] Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.565408 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.569748 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v"] Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.569918 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.570046 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.570008 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.572197 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.572432 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.572672 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.573894 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-764d56c5d4-trvgv"] Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.582581 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.741242 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/644a6631-07ad-4ddd-8cf1-c60f26b44a43-serving-cert\") pod \"route-controller-manager-78d57b857c-gjd9v\" (UID: \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\") " pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.741281 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0441e633-30eb-4e90-aa65-ca23fb9697f9-serving-cert\") pod \"controller-manager-764d56c5d4-trvgv\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.741327 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/644a6631-07ad-4ddd-8cf1-c60f26b44a43-config\") pod \"route-controller-manager-78d57b857c-gjd9v\" (UID: \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\") " pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.741425 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0441e633-30eb-4e90-aa65-ca23fb9697f9-proxy-ca-bundles\") pod \"controller-manager-764d56c5d4-trvgv\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.742133 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2vxf\" (UniqueName: \"kubernetes.io/projected/0441e633-30eb-4e90-aa65-ca23fb9697f9-kube-api-access-f2vxf\") pod \"controller-manager-764d56c5d4-trvgv\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " 
pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.742166 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/644a6631-07ad-4ddd-8cf1-c60f26b44a43-client-ca\") pod \"route-controller-manager-78d57b857c-gjd9v\" (UID: \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\") " pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.742226 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0441e633-30eb-4e90-aa65-ca23fb9697f9-config\") pod \"controller-manager-764d56c5d4-trvgv\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.742319 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0441e633-30eb-4e90-aa65-ca23fb9697f9-client-ca\") pod \"controller-manager-764d56c5d4-trvgv\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.742414 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v7xp\" (UniqueName: \"kubernetes.io/projected/644a6631-07ad-4ddd-8cf1-c60f26b44a43-kube-api-access-5v7xp\") pod \"route-controller-manager-78d57b857c-gjd9v\" (UID: \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\") " pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.776863 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.843455 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/644a6631-07ad-4ddd-8cf1-c60f26b44a43-serving-cert\") pod \"route-controller-manager-78d57b857c-gjd9v\" (UID: \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\") " pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.844059 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0441e633-30eb-4e90-aa65-ca23fb9697f9-serving-cert\") pod \"controller-manager-764d56c5d4-trvgv\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.844088 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/644a6631-07ad-4ddd-8cf1-c60f26b44a43-config\") pod \"route-controller-manager-78d57b857c-gjd9v\" (UID: \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\") " pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.844117 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0441e633-30eb-4e90-aa65-ca23fb9697f9-proxy-ca-bundles\") pod \"controller-manager-764d56c5d4-trvgv\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.844138 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2vxf\" (UniqueName: 
\"kubernetes.io/projected/0441e633-30eb-4e90-aa65-ca23fb9697f9-kube-api-access-f2vxf\") pod \"controller-manager-764d56c5d4-trvgv\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.844162 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/644a6631-07ad-4ddd-8cf1-c60f26b44a43-client-ca\") pod \"route-controller-manager-78d57b857c-gjd9v\" (UID: \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\") " pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.844182 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0441e633-30eb-4e90-aa65-ca23fb9697f9-config\") pod \"controller-manager-764d56c5d4-trvgv\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.844204 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0441e633-30eb-4e90-aa65-ca23fb9697f9-client-ca\") pod \"controller-manager-764d56c5d4-trvgv\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.844222 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v7xp\" (UniqueName: \"kubernetes.io/projected/644a6631-07ad-4ddd-8cf1-c60f26b44a43-kube-api-access-5v7xp\") pod \"route-controller-manager-78d57b857c-gjd9v\" (UID: \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\") " pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" Feb 27 10:29:51 crc kubenswrapper[4728]: 
I0227 10:29:51.845279 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0441e633-30eb-4e90-aa65-ca23fb9697f9-client-ca\") pod \"controller-manager-764d56c5d4-trvgv\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.845433 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/644a6631-07ad-4ddd-8cf1-c60f26b44a43-config\") pod \"route-controller-manager-78d57b857c-gjd9v\" (UID: \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\") " pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.845816 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0441e633-30eb-4e90-aa65-ca23fb9697f9-config\") pod \"controller-manager-764d56c5d4-trvgv\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.845904 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0441e633-30eb-4e90-aa65-ca23fb9697f9-proxy-ca-bundles\") pod \"controller-manager-764d56c5d4-trvgv\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.846030 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/644a6631-07ad-4ddd-8cf1-c60f26b44a43-client-ca\") pod \"route-controller-manager-78d57b857c-gjd9v\" (UID: \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\") " 
pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.849584 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0441e633-30eb-4e90-aa65-ca23fb9697f9-serving-cert\") pod \"controller-manager-764d56c5d4-trvgv\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.854793 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/644a6631-07ad-4ddd-8cf1-c60f26b44a43-serving-cert\") pod \"route-controller-manager-78d57b857c-gjd9v\" (UID: \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\") " pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.858370 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v7xp\" (UniqueName: \"kubernetes.io/projected/644a6631-07ad-4ddd-8cf1-c60f26b44a43-kube-api-access-5v7xp\") pod \"route-controller-manager-78d57b857c-gjd9v\" (UID: \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\") " pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.865098 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2vxf\" (UniqueName: \"kubernetes.io/projected/0441e633-30eb-4e90-aa65-ca23fb9697f9-kube-api-access-f2vxf\") pod \"controller-manager-764d56c5d4-trvgv\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.877081 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.893276 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.945283 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e-kube-api-access\") pod \"02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e\" (UID: \"02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e\") " Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.945598 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e-kubelet-dir\") pod \"02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e\" (UID: \"02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e\") " Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.945740 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e" (UID: "02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.946071 4728 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:51 crc kubenswrapper[4728]: I0227 10:29:51.949668 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e" (UID: "02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.047680 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.101426 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-764d56c5d4-trvgv"] Feb 27 10:29:52 crc kubenswrapper[4728]: W0227 10:29:52.107773 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0441e633_30eb_4e90_aa65_ca23fb9697f9.slice/crio-5620a15d87dd4265ea396222e3339095edb623ec7e2670a44de34c0c97141844 WatchSource:0}: Error finding container 5620a15d87dd4265ea396222e3339095edb623ec7e2670a44de34c0c97141844: Status 404 returned error can't find the container with id 5620a15d87dd4265ea396222e3339095edb623ec7e2670a44de34c0c97141844 Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.272048 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v"] Feb 27 10:29:52 crc 
kubenswrapper[4728]: W0227 10:29:52.277408 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod644a6631_07ad_4ddd_8cf1_c60f26b44a43.slice/crio-3565b93ed50f8d3fc6e5ca835a9aa43fd14d64fa523a9b5539e4f2e56340c159 WatchSource:0}: Error finding container 3565b93ed50f8d3fc6e5ca835a9aa43fd14d64fa523a9b5539e4f2e56340c159: Status 404 returned error can't find the container with id 3565b93ed50f8d3fc6e5ca835a9aa43fd14d64fa523a9b5539e4f2e56340c159 Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.472777 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 10:29:52 crc kubenswrapper[4728]: E0227 10:29:52.472976 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e" containerName="pruner" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.472988 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e" containerName="pruner" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.473087 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e" containerName="pruner" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.473467 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.490414 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.517759 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" event={"ID":"0441e633-30eb-4e90-aa65-ca23fb9697f9","Type":"ContainerStarted","Data":"63ea93649b36e4d2b410acc9c393f2f264083660cfc0ffb242e4ab8558b96528"} Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.517810 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" event={"ID":"0441e633-30eb-4e90-aa65-ca23fb9697f9","Type":"ContainerStarted","Data":"5620a15d87dd4265ea396222e3339095edb623ec7e2670a44de34c0c97141844"} Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.518028 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.519651 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" event={"ID":"644a6631-07ad-4ddd-8cf1-c60f26b44a43","Type":"ContainerStarted","Data":"819faf11edfaf3f737506bcacb2ce472a6dc836c306ab2cc62a15591a08b1a74"} Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.519716 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" event={"ID":"644a6631-07ad-4ddd-8cf1-c60f26b44a43","Type":"ContainerStarted","Data":"3565b93ed50f8d3fc6e5ca835a9aa43fd14d64fa523a9b5539e4f2e56340c159"} Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.519883 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.521481 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"02fffb4d-70a0-44df-9c12-c9c9ae4a2c0e","Type":"ContainerDied","Data":"c6ca13c4fca1b2bc48fe9ad777eba2ccf221ea0b2f7dfa9d532ad16ad6fd02f5"} Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.521520 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6ca13c4fca1b2bc48fe9ad777eba2ccf221ea0b2f7dfa9d532ad16ad6fd02f5" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.521566 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.526923 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.536841 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" podStartSLOduration=4.5368257960000005 podStartE2EDuration="4.536825796s" podCreationTimestamp="2026-02-27 10:29:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:52.534307134 +0000 UTC m=+212.496673240" watchObservedRunningTime="2026-02-27 10:29:52.536825796 +0000 UTC m=+212.499191902" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.592821 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" podStartSLOduration=4.592803038 podStartE2EDuration="4.592803038s" podCreationTimestamp="2026-02-27 10:29:48 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:52.5664551 +0000 UTC m=+212.528821206" watchObservedRunningTime="2026-02-27 10:29:52.592803038 +0000 UTC m=+212.555169144" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.653287 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7a457cf-2c88-458c-b3a3-e53f1b717d81-kube-api-access\") pod \"installer-9-crc\" (UID: \"e7a457cf-2c88-458c-b3a3-e53f1b717d81\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.653337 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7a457cf-2c88-458c-b3a3-e53f1b717d81-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e7a457cf-2c88-458c-b3a3-e53f1b717d81\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.653430 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e7a457cf-2c88-458c-b3a3-e53f1b717d81-var-lock\") pod \"installer-9-crc\" (UID: \"e7a457cf-2c88-458c-b3a3-e53f1b717d81\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.754739 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.755583 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e7a457cf-2c88-458c-b3a3-e53f1b717d81-var-lock\") pod \"installer-9-crc\" (UID: \"e7a457cf-2c88-458c-b3a3-e53f1b717d81\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 
27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.755636 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7a457cf-2c88-458c-b3a3-e53f1b717d81-kube-api-access\") pod \"installer-9-crc\" (UID: \"e7a457cf-2c88-458c-b3a3-e53f1b717d81\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.755737 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e7a457cf-2c88-458c-b3a3-e53f1b717d81-var-lock\") pod \"installer-9-crc\" (UID: \"e7a457cf-2c88-458c-b3a3-e53f1b717d81\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.755864 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7a457cf-2c88-458c-b3a3-e53f1b717d81-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e7a457cf-2c88-458c-b3a3-e53f1b717d81\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.755995 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7a457cf-2c88-458c-b3a3-e53f1b717d81-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e7a457cf-2c88-458c-b3a3-e53f1b717d81\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.775119 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7a457cf-2c88-458c-b3a3-e53f1b717d81-kube-api-access\") pod \"installer-9-crc\" (UID: \"e7a457cf-2c88-458c-b3a3-e53f1b717d81\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:29:52 crc kubenswrapper[4728]: I0227 10:29:52.794615 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:29:53 crc kubenswrapper[4728]: I0227 10:29:53.199577 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 10:29:53 crc kubenswrapper[4728]: W0227 10:29:53.207989 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode7a457cf_2c88_458c_b3a3_e53f1b717d81.slice/crio-635aea1e408cd0a3da79c218c7d1eff02227e7a130ddbf099abd83f3ca673133 WatchSource:0}: Error finding container 635aea1e408cd0a3da79c218c7d1eff02227e7a130ddbf099abd83f3ca673133: Status 404 returned error can't find the container with id 635aea1e408cd0a3da79c218c7d1eff02227e7a130ddbf099abd83f3ca673133 Feb 27 10:29:53 crc kubenswrapper[4728]: I0227 10:29:53.265567 4728 csr.go:261] certificate signing request csr-4btvf is approved, waiting to be issued Feb 27 10:29:53 crc kubenswrapper[4728]: I0227 10:29:53.272167 4728 csr.go:257] certificate signing request csr-4btvf is issued Feb 27 10:29:53 crc kubenswrapper[4728]: I0227 10:29:53.527279 4728 generic.go:334] "Generic (PLEG): container finished" podID="826461a8-eef9-4a1f-b4a7-4ff8076ec729" containerID="3160b2014171f57fd5e20be89b6d9d288e16ad060f61d38d1026dc51dd130408" exitCode=0 Feb 27 10:29:53 crc kubenswrapper[4728]: I0227 10:29:53.527369 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536468-682zs" event={"ID":"826461a8-eef9-4a1f-b4a7-4ff8076ec729","Type":"ContainerDied","Data":"3160b2014171f57fd5e20be89b6d9d288e16ad060f61d38d1026dc51dd130408"} Feb 27 10:29:53 crc kubenswrapper[4728]: I0227 10:29:53.529134 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e7a457cf-2c88-458c-b3a3-e53f1b717d81","Type":"ContainerStarted","Data":"29e794f4a0c5123cf62d527bf2f912ef5cea685ccdfe65acbedd5aa8455c5e34"} Feb 27 10:29:53 crc kubenswrapper[4728]: I0227 10:29:53.529367 4728 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e7a457cf-2c88-458c-b3a3-e53f1b717d81","Type":"ContainerStarted","Data":"635aea1e408cd0a3da79c218c7d1eff02227e7a130ddbf099abd83f3ca673133"} Feb 27 10:29:53 crc kubenswrapper[4728]: I0227 10:29:53.556697 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.556657547 podStartE2EDuration="1.556657547s" podCreationTimestamp="2026-02-27 10:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:29:53.55294507 +0000 UTC m=+213.515311176" watchObservedRunningTime="2026-02-27 10:29:53.556657547 +0000 UTC m=+213.519023673" Feb 27 10:29:54 crc kubenswrapper[4728]: I0227 10:29:54.277037 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-19 00:45:19.600585553 +0000 UTC Feb 27 10:29:54 crc kubenswrapper[4728]: I0227 10:29:54.277658 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7070h15m25.322941462s for next certificate rotation Feb 27 10:29:54 crc kubenswrapper[4728]: I0227 10:29:54.816767 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536468-682zs" Feb 27 10:29:54 crc kubenswrapper[4728]: I0227 10:29:54.987543 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-762bl\" (UniqueName: \"kubernetes.io/projected/826461a8-eef9-4a1f-b4a7-4ff8076ec729-kube-api-access-762bl\") pod \"826461a8-eef9-4a1f-b4a7-4ff8076ec729\" (UID: \"826461a8-eef9-4a1f-b4a7-4ff8076ec729\") " Feb 27 10:29:54 crc kubenswrapper[4728]: I0227 10:29:54.995523 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826461a8-eef9-4a1f-b4a7-4ff8076ec729-kube-api-access-762bl" (OuterVolumeSpecName: "kube-api-access-762bl") pod "826461a8-eef9-4a1f-b4a7-4ff8076ec729" (UID: "826461a8-eef9-4a1f-b4a7-4ff8076ec729"). InnerVolumeSpecName "kube-api-access-762bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:29:55 crc kubenswrapper[4728]: I0227 10:29:55.089375 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-762bl\" (UniqueName: \"kubernetes.io/projected/826461a8-eef9-4a1f-b4a7-4ff8076ec729-kube-api-access-762bl\") on node \"crc\" DevicePath \"\"" Feb 27 10:29:55 crc kubenswrapper[4728]: I0227 10:29:55.278695 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-08 10:17:18.506313927 +0000 UTC Feb 27 10:29:55 crc kubenswrapper[4728]: I0227 10:29:55.278751 4728 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6095h47m23.227565735s for next certificate rotation Feb 27 10:29:55 crc kubenswrapper[4728]: I0227 10:29:55.543027 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536468-682zs" event={"ID":"826461a8-eef9-4a1f-b4a7-4ff8076ec729","Type":"ContainerDied","Data":"5d7bfae19605849c6effc627087526eab119f59e77c0269f316a9cf2300fdf43"} Feb 27 10:29:55 crc kubenswrapper[4728]: I0227 
10:29:55.543069 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d7bfae19605849c6effc627087526eab119f59e77c0269f316a9cf2300fdf43" Feb 27 10:29:55 crc kubenswrapper[4728]: I0227 10:29:55.543070 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536468-682zs" Feb 27 10:29:59 crc kubenswrapper[4728]: I0227 10:29:59.571712 4728 generic.go:334] "Generic (PLEG): container finished" podID="7850a694-dd44-4f4d-9b97-ecaa50efb803" containerID="997a4512d4443c069d5956b3d5c28c9637029a19ff645862f360e8945c384b28" exitCode=0 Feb 27 10:29:59 crc kubenswrapper[4728]: I0227 10:29:59.571765 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pztrz" event={"ID":"7850a694-dd44-4f4d-9b97-ecaa50efb803","Type":"ContainerDied","Data":"997a4512d4443c069d5956b3d5c28c9637029a19ff645862f360e8945c384b28"} Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.138649 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536470-xfw9v"] Feb 27 10:30:00 crc kubenswrapper[4728]: E0227 10:30:00.139011 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826461a8-eef9-4a1f-b4a7-4ff8076ec729" containerName="oc" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.139039 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="826461a8-eef9-4a1f-b4a7-4ff8076ec729" containerName="oc" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.139235 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="826461a8-eef9-4a1f-b4a7-4ff8076ec729" containerName="oc" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.139878 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536470-xfw9v" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.143877 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.144004 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.144203 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.148331 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536470-xfw9v"] Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.168179 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw6km\" (UniqueName: \"kubernetes.io/projected/380270a6-c1d3-49a1-b3c7-9080ae9038b9-kube-api-access-hw6km\") pod \"auto-csr-approver-29536470-xfw9v\" (UID: \"380270a6-c1d3-49a1-b3c7-9080ae9038b9\") " pod="openshift-infra/auto-csr-approver-29536470-xfw9v" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.238530 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j"] Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.239690 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.242838 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.245535 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.246324 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j"] Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.269564 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw6km\" (UniqueName: \"kubernetes.io/projected/380270a6-c1d3-49a1-b3c7-9080ae9038b9-kube-api-access-hw6km\") pod \"auto-csr-approver-29536470-xfw9v\" (UID: \"380270a6-c1d3-49a1-b3c7-9080ae9038b9\") " pod="openshift-infra/auto-csr-approver-29536470-xfw9v" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.289881 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw6km\" (UniqueName: \"kubernetes.io/projected/380270a6-c1d3-49a1-b3c7-9080ae9038b9-kube-api-access-hw6km\") pod \"auto-csr-approver-29536470-xfw9v\" (UID: \"380270a6-c1d3-49a1-b3c7-9080ae9038b9\") " pod="openshift-infra/auto-csr-approver-29536470-xfw9v" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.371877 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11c85e5f-01f8-436d-96d7-d623c640df36-config-volume\") pod \"collect-profiles-29536470-g4k6j\" (UID: \"11c85e5f-01f8-436d-96d7-d623c640df36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j" Feb 27 10:30:00 crc 
kubenswrapper[4728]: I0227 10:30:00.371953 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11c85e5f-01f8-436d-96d7-d623c640df36-secret-volume\") pod \"collect-profiles-29536470-g4k6j\" (UID: \"11c85e5f-01f8-436d-96d7-d623c640df36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.371971 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5tq4\" (UniqueName: \"kubernetes.io/projected/11c85e5f-01f8-436d-96d7-d623c640df36-kube-api-access-h5tq4\") pod \"collect-profiles-29536470-g4k6j\" (UID: \"11c85e5f-01f8-436d-96d7-d623c640df36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.464173 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536470-xfw9v" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.472848 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11c85e5f-01f8-436d-96d7-d623c640df36-secret-volume\") pod \"collect-profiles-29536470-g4k6j\" (UID: \"11c85e5f-01f8-436d-96d7-d623c640df36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.472885 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5tq4\" (UniqueName: \"kubernetes.io/projected/11c85e5f-01f8-436d-96d7-d623c640df36-kube-api-access-h5tq4\") pod \"collect-profiles-29536470-g4k6j\" (UID: \"11c85e5f-01f8-436d-96d7-d623c640df36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.472941 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11c85e5f-01f8-436d-96d7-d623c640df36-config-volume\") pod \"collect-profiles-29536470-g4k6j\" (UID: \"11c85e5f-01f8-436d-96d7-d623c640df36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.474960 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11c85e5f-01f8-436d-96d7-d623c640df36-config-volume\") pod \"collect-profiles-29536470-g4k6j\" (UID: \"11c85e5f-01f8-436d-96d7-d623c640df36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.478264 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11c85e5f-01f8-436d-96d7-d623c640df36-secret-volume\") pod \"collect-profiles-29536470-g4k6j\" (UID: \"11c85e5f-01f8-436d-96d7-d623c640df36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.494863 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5tq4\" (UniqueName: \"kubernetes.io/projected/11c85e5f-01f8-436d-96d7-d623c640df36-kube-api-access-h5tq4\") pod \"collect-profiles-29536470-g4k6j\" (UID: \"11c85e5f-01f8-436d-96d7-d623c640df36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.558195 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.590169 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pztrz" event={"ID":"7850a694-dd44-4f4d-9b97-ecaa50efb803","Type":"ContainerStarted","Data":"0efe2242a39af5b84489968861f63da49209417961d8cf509813a4d0b0936141"} Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.632391 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pztrz" podStartSLOduration=2.654186782 podStartE2EDuration="49.632370683s" podCreationTimestamp="2026-02-27 10:29:11 +0000 UTC" firstStartedPulling="2026-02-27 10:29:12.967588319 +0000 UTC m=+172.929954425" lastFinishedPulling="2026-02-27 10:29:59.94577222 +0000 UTC m=+219.908138326" observedRunningTime="2026-02-27 10:30:00.629774179 +0000 UTC m=+220.592140295" watchObservedRunningTime="2026-02-27 10:30:00.632370683 +0000 UTC m=+220.594736789" Feb 27 10:30:00 crc kubenswrapper[4728]: I0227 10:30:00.945049 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536470-xfw9v"] Feb 27 10:30:00 crc kubenswrapper[4728]: W0227 10:30:00.951473 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod380270a6_c1d3_49a1_b3c7_9080ae9038b9.slice/crio-a3f788207d9f887b0d50ba225fba5d077a0ede8399c8e389336d01b903bbff04 WatchSource:0}: Error finding container a3f788207d9f887b0d50ba225fba5d077a0ede8399c8e389336d01b903bbff04: Status 404 returned error can't find the container with id a3f788207d9f887b0d50ba225fba5d077a0ede8399c8e389336d01b903bbff04 Feb 27 10:30:01 crc kubenswrapper[4728]: I0227 10:30:01.087997 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j"] Feb 27 10:30:01 crc kubenswrapper[4728]: 
I0227 10:30:01.596108 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j" event={"ID":"11c85e5f-01f8-436d-96d7-d623c640df36","Type":"ContainerStarted","Data":"1c5a4a6aa1ce4efa3ca02736f1406efd10b4d12d218e88a727d18b9be77e8953"} Feb 27 10:30:01 crc kubenswrapper[4728]: I0227 10:30:01.597162 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536470-xfw9v" event={"ID":"380270a6-c1d3-49a1-b3c7-9080ae9038b9","Type":"ContainerStarted","Data":"a3f788207d9f887b0d50ba225fba5d077a0ede8399c8e389336d01b903bbff04"} Feb 27 10:30:01 crc kubenswrapper[4728]: I0227 10:30:01.598931 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsskq" event={"ID":"49693a3e-1583-4584-9049-fe85013bb9ab","Type":"ContainerStarted","Data":"70d34416b3e365a6605526fabfe3c159ae9fc6a91e93aa1b0ac081190033b7f9"} Feb 27 10:30:01 crc kubenswrapper[4728]: I0227 10:30:01.823040 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pztrz" Feb 27 10:30:01 crc kubenswrapper[4728]: I0227 10:30:01.823105 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pztrz" Feb 27 10:30:02 crc kubenswrapper[4728]: E0227 10:30:02.025315 4728 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11c85e5f_01f8_436d_96d7_d623c640df36.slice/crio-d93b3d5c2c323f1d67065ea5b5844296d0bde41810b6c050efd46056af158962.scope\": RecentStats: unable to find data in memory cache]" Feb 27 10:30:02 crc kubenswrapper[4728]: I0227 10:30:02.607253 4728 generic.go:334] "Generic (PLEG): container finished" podID="11c85e5f-01f8-436d-96d7-d623c640df36" containerID="d93b3d5c2c323f1d67065ea5b5844296d0bde41810b6c050efd46056af158962" exitCode=0 Feb 27 
10:30:02 crc kubenswrapper[4728]: I0227 10:30:02.607304 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j" event={"ID":"11c85e5f-01f8-436d-96d7-d623c640df36","Type":"ContainerDied","Data":"d93b3d5c2c323f1d67065ea5b5844296d0bde41810b6c050efd46056af158962"} Feb 27 10:30:02 crc kubenswrapper[4728]: I0227 10:30:02.610898 4728 generic.go:334] "Generic (PLEG): container finished" podID="49693a3e-1583-4584-9049-fe85013bb9ab" containerID="70d34416b3e365a6605526fabfe3c159ae9fc6a91e93aa1b0ac081190033b7f9" exitCode=0 Feb 27 10:30:02 crc kubenswrapper[4728]: I0227 10:30:02.610968 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsskq" event={"ID":"49693a3e-1583-4584-9049-fe85013bb9ab","Type":"ContainerDied","Data":"70d34416b3e365a6605526fabfe3c159ae9fc6a91e93aa1b0ac081190033b7f9"} Feb 27 10:30:03 crc kubenswrapper[4728]: I0227 10:30:03.489997 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pztrz" podUID="7850a694-dd44-4f4d-9b97-ecaa50efb803" containerName="registry-server" probeResult="failure" output=< Feb 27 10:30:03 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 10:30:03 crc kubenswrapper[4728]: > Feb 27 10:30:07 crc kubenswrapper[4728]: I0227 10:30:07.750057 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j" Feb 27 10:30:07 crc kubenswrapper[4728]: I0227 10:30:07.921903 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11c85e5f-01f8-436d-96d7-d623c640df36-config-volume\") pod \"11c85e5f-01f8-436d-96d7-d623c640df36\" (UID: \"11c85e5f-01f8-436d-96d7-d623c640df36\") " Feb 27 10:30:07 crc kubenswrapper[4728]: I0227 10:30:07.922381 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5tq4\" (UniqueName: \"kubernetes.io/projected/11c85e5f-01f8-436d-96d7-d623c640df36-kube-api-access-h5tq4\") pod \"11c85e5f-01f8-436d-96d7-d623c640df36\" (UID: \"11c85e5f-01f8-436d-96d7-d623c640df36\") " Feb 27 10:30:07 crc kubenswrapper[4728]: I0227 10:30:07.922430 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11c85e5f-01f8-436d-96d7-d623c640df36-secret-volume\") pod \"11c85e5f-01f8-436d-96d7-d623c640df36\" (UID: \"11c85e5f-01f8-436d-96d7-d623c640df36\") " Feb 27 10:30:07 crc kubenswrapper[4728]: I0227 10:30:07.922955 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11c85e5f-01f8-436d-96d7-d623c640df36-config-volume" (OuterVolumeSpecName: "config-volume") pod "11c85e5f-01f8-436d-96d7-d623c640df36" (UID: "11c85e5f-01f8-436d-96d7-d623c640df36"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:30:07 crc kubenswrapper[4728]: I0227 10:30:07.923346 4728 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11c85e5f-01f8-436d-96d7-d623c640df36-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:07 crc kubenswrapper[4728]: I0227 10:30:07.928564 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11c85e5f-01f8-436d-96d7-d623c640df36-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "11c85e5f-01f8-436d-96d7-d623c640df36" (UID: "11c85e5f-01f8-436d-96d7-d623c640df36"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:30:07 crc kubenswrapper[4728]: I0227 10:30:07.930035 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11c85e5f-01f8-436d-96d7-d623c640df36-kube-api-access-h5tq4" (OuterVolumeSpecName: "kube-api-access-h5tq4") pod "11c85e5f-01f8-436d-96d7-d623c640df36" (UID: "11c85e5f-01f8-436d-96d7-d623c640df36"). InnerVolumeSpecName "kube-api-access-h5tq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:30:08 crc kubenswrapper[4728]: I0227 10:30:08.025712 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5tq4\" (UniqueName: \"kubernetes.io/projected/11c85e5f-01f8-436d-96d7-d623c640df36-kube-api-access-h5tq4\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:08 crc kubenswrapper[4728]: I0227 10:30:08.025776 4728 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11c85e5f-01f8-436d-96d7-d623c640df36-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:08 crc kubenswrapper[4728]: I0227 10:30:08.407494 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-764d56c5d4-trvgv"] Feb 27 10:30:08 crc kubenswrapper[4728]: I0227 10:30:08.407801 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" podUID="0441e633-30eb-4e90-aa65-ca23fb9697f9" containerName="controller-manager" containerID="cri-o://63ea93649b36e4d2b410acc9c393f2f264083660cfc0ffb242e4ab8558b96528" gracePeriod=30 Feb 27 10:30:08 crc kubenswrapper[4728]: I0227 10:30:08.425970 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v"] Feb 27 10:30:08 crc kubenswrapper[4728]: I0227 10:30:08.426469 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" podUID="644a6631-07ad-4ddd-8cf1-c60f26b44a43" containerName="route-controller-manager" containerID="cri-o://819faf11edfaf3f737506bcacb2ce472a6dc836c306ab2cc62a15591a08b1a74" gracePeriod=30 Feb 27 10:30:08 crc kubenswrapper[4728]: I0227 10:30:08.665483 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j" 
event={"ID":"11c85e5f-01f8-436d-96d7-d623c640df36","Type":"ContainerDied","Data":"1c5a4a6aa1ce4efa3ca02736f1406efd10b4d12d218e88a727d18b9be77e8953"} Feb 27 10:30:08 crc kubenswrapper[4728]: I0227 10:30:08.665560 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c5a4a6aa1ce4efa3ca02736f1406efd10b4d12d218e88a727d18b9be77e8953" Feb 27 10:30:08 crc kubenswrapper[4728]: I0227 10:30:08.665666 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j" Feb 27 10:30:09 crc kubenswrapper[4728]: I0227 10:30:09.675794 4728 generic.go:334] "Generic (PLEG): container finished" podID="644a6631-07ad-4ddd-8cf1-c60f26b44a43" containerID="819faf11edfaf3f737506bcacb2ce472a6dc836c306ab2cc62a15591a08b1a74" exitCode=0 Feb 27 10:30:09 crc kubenswrapper[4728]: I0227 10:30:09.676087 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" event={"ID":"644a6631-07ad-4ddd-8cf1-c60f26b44a43","Type":"ContainerDied","Data":"819faf11edfaf3f737506bcacb2ce472a6dc836c306ab2cc62a15591a08b1a74"} Feb 27 10:30:09 crc kubenswrapper[4728]: I0227 10:30:09.679972 4728 generic.go:334] "Generic (PLEG): container finished" podID="0441e633-30eb-4e90-aa65-ca23fb9697f9" containerID="63ea93649b36e4d2b410acc9c393f2f264083660cfc0ffb242e4ab8558b96528" exitCode=0 Feb 27 10:30:09 crc kubenswrapper[4728]: I0227 10:30:09.680022 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" event={"ID":"0441e633-30eb-4e90-aa65-ca23fb9697f9","Type":"ContainerDied","Data":"63ea93649b36e4d2b410acc9c393f2f264083660cfc0ffb242e4ab8558b96528"} Feb 27 10:30:11 crc kubenswrapper[4728]: I0227 10:30:11.878885 4728 patch_prober.go:28] interesting pod/route-controller-manager-78d57b857c-gjd9v container/route-controller-manager 
namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Feb 27 10:30:11 crc kubenswrapper[4728]: I0227 10:30:11.878953 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" podUID="644a6631-07ad-4ddd-8cf1-c60f26b44a43" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Feb 27 10:30:11 crc kubenswrapper[4728]: I0227 10:30:11.895027 4728 patch_prober.go:28] interesting pod/controller-manager-764d56c5d4-trvgv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Feb 27 10:30:11 crc kubenswrapper[4728]: I0227 10:30:11.895140 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" podUID="0441e633-30eb-4e90-aa65-ca23fb9697f9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Feb 27 10:30:11 crc kubenswrapper[4728]: I0227 10:30:11.982189 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pztrz" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.076190 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pztrz" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.111200 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-25vw6"] Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.324983 
4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.350392 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77647bd6bd-tptvb"] Feb 27 10:30:12 crc kubenswrapper[4728]: E0227 10:30:12.350624 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0441e633-30eb-4e90-aa65-ca23fb9697f9" containerName="controller-manager" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.350636 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0441e633-30eb-4e90-aa65-ca23fb9697f9" containerName="controller-manager" Feb 27 10:30:12 crc kubenswrapper[4728]: E0227 10:30:12.350656 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11c85e5f-01f8-436d-96d7-d623c640df36" containerName="collect-profiles" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.350662 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="11c85e5f-01f8-436d-96d7-d623c640df36" containerName="collect-profiles" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.350751 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0441e633-30eb-4e90-aa65-ca23fb9697f9" containerName="controller-manager" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.350761 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="11c85e5f-01f8-436d-96d7-d623c640df36" containerName="collect-profiles" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.351126 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.357837 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77647bd6bd-tptvb"] Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.397879 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0441e633-30eb-4e90-aa65-ca23fb9697f9-serving-cert\") pod \"0441e633-30eb-4e90-aa65-ca23fb9697f9\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.397919 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0441e633-30eb-4e90-aa65-ca23fb9697f9-client-ca\") pod \"0441e633-30eb-4e90-aa65-ca23fb9697f9\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.398026 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f66047b-9fdb-4972-b8e4-59eedce02a32-proxy-ca-bundles\") pod \"controller-manager-77647bd6bd-tptvb\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") " pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.398051 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zkcs\" (UniqueName: \"kubernetes.io/projected/3f66047b-9fdb-4972-b8e4-59eedce02a32-kube-api-access-5zkcs\") pod \"controller-manager-77647bd6bd-tptvb\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") " pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.398068 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f66047b-9fdb-4972-b8e4-59eedce02a32-serving-cert\") pod \"controller-manager-77647bd6bd-tptvb\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") " pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.398084 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f66047b-9fdb-4972-b8e4-59eedce02a32-client-ca\") pod \"controller-manager-77647bd6bd-tptvb\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") " pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.398101 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f66047b-9fdb-4972-b8e4-59eedce02a32-config\") pod \"controller-manager-77647bd6bd-tptvb\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") " pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.398709 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0441e633-30eb-4e90-aa65-ca23fb9697f9-client-ca" (OuterVolumeSpecName: "client-ca") pod "0441e633-30eb-4e90-aa65-ca23fb9697f9" (UID: "0441e633-30eb-4e90-aa65-ca23fb9697f9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.411706 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0441e633-30eb-4e90-aa65-ca23fb9697f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0441e633-30eb-4e90-aa65-ca23fb9697f9" (UID: "0441e633-30eb-4e90-aa65-ca23fb9697f9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.422998 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.499257 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2vxf\" (UniqueName: \"kubernetes.io/projected/0441e633-30eb-4e90-aa65-ca23fb9697f9-kube-api-access-f2vxf\") pod \"0441e633-30eb-4e90-aa65-ca23fb9697f9\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.499302 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/644a6631-07ad-4ddd-8cf1-c60f26b44a43-config\") pod \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\" (UID: \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\") " Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.499325 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v7xp\" (UniqueName: \"kubernetes.io/projected/644a6631-07ad-4ddd-8cf1-c60f26b44a43-kube-api-access-5v7xp\") pod \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\" (UID: \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\") " Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.499350 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/644a6631-07ad-4ddd-8cf1-c60f26b44a43-client-ca\") pod \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\" (UID: \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\") " Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.499365 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0441e633-30eb-4e90-aa65-ca23fb9697f9-config\") pod \"0441e633-30eb-4e90-aa65-ca23fb9697f9\" (UID: 
\"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.499393 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0441e633-30eb-4e90-aa65-ca23fb9697f9-proxy-ca-bundles\") pod \"0441e633-30eb-4e90-aa65-ca23fb9697f9\" (UID: \"0441e633-30eb-4e90-aa65-ca23fb9697f9\") " Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.499418 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/644a6631-07ad-4ddd-8cf1-c60f26b44a43-serving-cert\") pod \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\" (UID: \"644a6631-07ad-4ddd-8cf1-c60f26b44a43\") " Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.499540 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f66047b-9fdb-4972-b8e4-59eedce02a32-proxy-ca-bundles\") pod \"controller-manager-77647bd6bd-tptvb\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") " pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.499564 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zkcs\" (UniqueName: \"kubernetes.io/projected/3f66047b-9fdb-4972-b8e4-59eedce02a32-kube-api-access-5zkcs\") pod \"controller-manager-77647bd6bd-tptvb\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") " pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.499579 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f66047b-9fdb-4972-b8e4-59eedce02a32-serving-cert\") pod \"controller-manager-77647bd6bd-tptvb\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") " 
pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.499595 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f66047b-9fdb-4972-b8e4-59eedce02a32-client-ca\") pod \"controller-manager-77647bd6bd-tptvb\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") " pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.499611 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f66047b-9fdb-4972-b8e4-59eedce02a32-config\") pod \"controller-manager-77647bd6bd-tptvb\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") " pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.499655 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0441e633-30eb-4e90-aa65-ca23fb9697f9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.499664 4728 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0441e633-30eb-4e90-aa65-ca23fb9697f9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.500885 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f66047b-9fdb-4972-b8e4-59eedce02a32-config\") pod \"controller-manager-77647bd6bd-tptvb\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") " pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.501482 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/644a6631-07ad-4ddd-8cf1-c60f26b44a43-client-ca" (OuterVolumeSpecName: "client-ca") pod "644a6631-07ad-4ddd-8cf1-c60f26b44a43" (UID: "644a6631-07ad-4ddd-8cf1-c60f26b44a43"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.501498 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0441e633-30eb-4e90-aa65-ca23fb9697f9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0441e633-30eb-4e90-aa65-ca23fb9697f9" (UID: "0441e633-30eb-4e90-aa65-ca23fb9697f9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.501713 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0441e633-30eb-4e90-aa65-ca23fb9697f9-config" (OuterVolumeSpecName: "config") pod "0441e633-30eb-4e90-aa65-ca23fb9697f9" (UID: "0441e633-30eb-4e90-aa65-ca23fb9697f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.501736 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f66047b-9fdb-4972-b8e4-59eedce02a32-client-ca\") pod \"controller-manager-77647bd6bd-tptvb\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") " pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.502688 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f66047b-9fdb-4972-b8e4-59eedce02a32-proxy-ca-bundles\") pod \"controller-manager-77647bd6bd-tptvb\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") " pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.503423 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/644a6631-07ad-4ddd-8cf1-c60f26b44a43-config" (OuterVolumeSpecName: "config") pod "644a6631-07ad-4ddd-8cf1-c60f26b44a43" (UID: "644a6631-07ad-4ddd-8cf1-c60f26b44a43"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.503621 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0441e633-30eb-4e90-aa65-ca23fb9697f9-kube-api-access-f2vxf" (OuterVolumeSpecName: "kube-api-access-f2vxf") pod "0441e633-30eb-4e90-aa65-ca23fb9697f9" (UID: "0441e633-30eb-4e90-aa65-ca23fb9697f9"). InnerVolumeSpecName "kube-api-access-f2vxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.503793 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644a6631-07ad-4ddd-8cf1-c60f26b44a43-kube-api-access-5v7xp" (OuterVolumeSpecName: "kube-api-access-5v7xp") pod "644a6631-07ad-4ddd-8cf1-c60f26b44a43" (UID: "644a6631-07ad-4ddd-8cf1-c60f26b44a43"). InnerVolumeSpecName "kube-api-access-5v7xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.504052 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644a6631-07ad-4ddd-8cf1-c60f26b44a43-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "644a6631-07ad-4ddd-8cf1-c60f26b44a43" (UID: "644a6631-07ad-4ddd-8cf1-c60f26b44a43"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.504142 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f66047b-9fdb-4972-b8e4-59eedce02a32-serving-cert\") pod \"controller-manager-77647bd6bd-tptvb\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") " pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.519432 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zkcs\" (UniqueName: \"kubernetes.io/projected/3f66047b-9fdb-4972-b8e4-59eedce02a32-kube-api-access-5zkcs\") pod \"controller-manager-77647bd6bd-tptvb\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") " pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.600318 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2vxf\" (UniqueName: 
\"kubernetes.io/projected/0441e633-30eb-4e90-aa65-ca23fb9697f9-kube-api-access-f2vxf\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.600349 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/644a6631-07ad-4ddd-8cf1-c60f26b44a43-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.600358 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v7xp\" (UniqueName: \"kubernetes.io/projected/644a6631-07ad-4ddd-8cf1-c60f26b44a43-kube-api-access-5v7xp\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.600366 4728 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/644a6631-07ad-4ddd-8cf1-c60f26b44a43-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.600375 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0441e633-30eb-4e90-aa65-ca23fb9697f9-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.600383 4728 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0441e633-30eb-4e90-aa65-ca23fb9697f9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.600391 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/644a6631-07ad-4ddd-8cf1-c60f26b44a43-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.663309 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.703689 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" event={"ID":"0441e633-30eb-4e90-aa65-ca23fb9697f9","Type":"ContainerDied","Data":"5620a15d87dd4265ea396222e3339095edb623ec7e2670a44de34c0c97141844"} Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.703771 4728 scope.go:117] "RemoveContainer" containerID="63ea93649b36e4d2b410acc9c393f2f264083660cfc0ffb242e4ab8558b96528" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.704015 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-764d56c5d4-trvgv" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.710310 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" event={"ID":"644a6631-07ad-4ddd-8cf1-c60f26b44a43","Type":"ContainerDied","Data":"3565b93ed50f8d3fc6e5ca835a9aa43fd14d64fa523a9b5539e4f2e56340c159"} Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.710396 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v" Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.751243 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-764d56c5d4-trvgv"] Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.763018 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-764d56c5d4-trvgv"] Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.769868 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v"] Feb 27 10:30:12 crc kubenswrapper[4728]: I0227 10:30:12.774174 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78d57b857c-gjd9v"] Feb 27 10:30:13 crc kubenswrapper[4728]: I0227 10:30:13.808391 4728 scope.go:117] "RemoveContainer" containerID="819faf11edfaf3f737506bcacb2ce472a6dc836c306ab2cc62a15591a08b1a74" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.172989 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77647bd6bd-tptvb"] Feb 27 10:30:14 crc kubenswrapper[4728]: W0227 10:30:14.197157 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f66047b_9fdb_4972_b8e4_59eedce02a32.slice/crio-a231fb8dd9553b11cc6cde16868b6e3918f272a1c1b8330152e78090d662ab55 WatchSource:0}: Error finding container a231fb8dd9553b11cc6cde16868b6e3918f272a1c1b8330152e78090d662ab55: Status 404 returned error can't find the container with id a231fb8dd9553b11cc6cde16868b6e3918f272a1c1b8330152e78090d662ab55 Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.538419 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798"] 
Feb 27 10:30:14 crc kubenswrapper[4728]: E0227 10:30:14.539020 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644a6631-07ad-4ddd-8cf1-c60f26b44a43" containerName="route-controller-manager" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.539042 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="644a6631-07ad-4ddd-8cf1-c60f26b44a43" containerName="route-controller-manager" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.539175 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="644a6631-07ad-4ddd-8cf1-c60f26b44a43" containerName="route-controller-manager" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.539663 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.541273 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.541305 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.543964 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.544383 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.545059 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.546125 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 
10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.553225 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798"] Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.629331 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqwnm\" (UniqueName: \"kubernetes.io/projected/7fed72ee-d3b1-4c16-8292-69269d9bf816-kube-api-access-cqwnm\") pod \"route-controller-manager-7bbd8c9df6-57798\" (UID: \"7fed72ee-d3b1-4c16-8292-69269d9bf816\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.629392 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fed72ee-d3b1-4c16-8292-69269d9bf816-client-ca\") pod \"route-controller-manager-7bbd8c9df6-57798\" (UID: \"7fed72ee-d3b1-4c16-8292-69269d9bf816\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.629413 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fed72ee-d3b1-4c16-8292-69269d9bf816-config\") pod \"route-controller-manager-7bbd8c9df6-57798\" (UID: \"7fed72ee-d3b1-4c16-8292-69269d9bf816\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.629460 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fed72ee-d3b1-4c16-8292-69269d9bf816-serving-cert\") pod \"route-controller-manager-7bbd8c9df6-57798\" (UID: \"7fed72ee-d3b1-4c16-8292-69269d9bf816\") " 
pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.730040 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fed72ee-d3b1-4c16-8292-69269d9bf816-client-ca\") pod \"route-controller-manager-7bbd8c9df6-57798\" (UID: \"7fed72ee-d3b1-4c16-8292-69269d9bf816\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.730093 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fed72ee-d3b1-4c16-8292-69269d9bf816-config\") pod \"route-controller-manager-7bbd8c9df6-57798\" (UID: \"7fed72ee-d3b1-4c16-8292-69269d9bf816\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.730116 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fed72ee-d3b1-4c16-8292-69269d9bf816-serving-cert\") pod \"route-controller-manager-7bbd8c9df6-57798\" (UID: \"7fed72ee-d3b1-4c16-8292-69269d9bf816\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.730185 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqwnm\" (UniqueName: \"kubernetes.io/projected/7fed72ee-d3b1-4c16-8292-69269d9bf816-kube-api-access-cqwnm\") pod \"route-controller-manager-7bbd8c9df6-57798\" (UID: \"7fed72ee-d3b1-4c16-8292-69269d9bf816\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.730967 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7fed72ee-d3b1-4c16-8292-69269d9bf816-client-ca\") pod \"route-controller-manager-7bbd8c9df6-57798\" (UID: \"7fed72ee-d3b1-4c16-8292-69269d9bf816\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.731229 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fed72ee-d3b1-4c16-8292-69269d9bf816-config\") pod \"route-controller-manager-7bbd8c9df6-57798\" (UID: \"7fed72ee-d3b1-4c16-8292-69269d9bf816\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.731344 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0441e633-30eb-4e90-aa65-ca23fb9697f9" path="/var/lib/kubelet/pods/0441e633-30eb-4e90-aa65-ca23fb9697f9/volumes" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.732692 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644a6631-07ad-4ddd-8cf1-c60f26b44a43" path="/var/lib/kubelet/pods/644a6631-07ad-4ddd-8cf1-c60f26b44a43/volumes" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.734948 4728 generic.go:334] "Generic (PLEG): container finished" podID="b27f5cf8-de13-42a0-825a-0bc27ddc8466" containerID="1a3d0bf303acbd394ca8ea75ea721ba7995d02c2aa82a9c837c9b4ebb3cb0866" exitCode=0 Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.735019 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnfnp" event={"ID":"b27f5cf8-de13-42a0-825a-0bc27ddc8466","Type":"ContainerDied","Data":"1a3d0bf303acbd394ca8ea75ea721ba7995d02c2aa82a9c837c9b4ebb3cb0866"} Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.736181 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fed72ee-d3b1-4c16-8292-69269d9bf816-serving-cert\") pod 
\"route-controller-manager-7bbd8c9df6-57798\" (UID: \"7fed72ee-d3b1-4c16-8292-69269d9bf816\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.742182 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" event={"ID":"3f66047b-9fdb-4972-b8e4-59eedce02a32","Type":"ContainerStarted","Data":"c918ac39608ba04b6b43a4dbfb408fe6c6c22533af4e2060da343b5787e82b63"} Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.742222 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" event={"ID":"3f66047b-9fdb-4972-b8e4-59eedce02a32","Type":"ContainerStarted","Data":"a231fb8dd9553b11cc6cde16868b6e3918f272a1c1b8330152e78090d662ab55"} Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.742720 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.758035 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.758483 4728 generic.go:334] "Generic (PLEG): container finished" podID="055d41f1-5e49-481e-8662-a245ba878526" containerID="811e272834c2a5900003e08d87412dec137b46ad0dd8e2f482aa92575b1ffd56" exitCode=0 Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.758644 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rq2kj" event={"ID":"055d41f1-5e49-481e-8662-a245ba878526","Type":"ContainerDied","Data":"811e272834c2a5900003e08d87412dec137b46ad0dd8e2f482aa92575b1ffd56"} Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.765463 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cqwnm\" (UniqueName: \"kubernetes.io/projected/7fed72ee-d3b1-4c16-8292-69269d9bf816-kube-api-access-cqwnm\") pod \"route-controller-manager-7bbd8c9df6-57798\" (UID: \"7fed72ee-d3b1-4c16-8292-69269d9bf816\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.766831 4728 generic.go:334] "Generic (PLEG): container finished" podID="7771abc7-886d-41eb-b966-74538062511f" containerID="eee8f3ab4c0874cbe5b816705008eea71b4e9947d8f93815b1a860077da9dca7" exitCode=0 Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.766888 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4bnk" event={"ID":"7771abc7-886d-41eb-b966-74538062511f","Type":"ContainerDied","Data":"eee8f3ab4c0874cbe5b816705008eea71b4e9947d8f93815b1a860077da9dca7"} Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.771835 4728 generic.go:334] "Generic (PLEG): container finished" podID="34088c2f-1e95-4227-9242-9e4cde7a9fde" containerID="4251be889dce4d9fe923660b66023de97a4a4cbb7b4de1e65dd25b25c43d14c4" exitCode=0 Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.771931 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b97gn" event={"ID":"34088c2f-1e95-4227-9242-9e4cde7a9fde","Type":"ContainerDied","Data":"4251be889dce4d9fe923660b66023de97a4a4cbb7b4de1e65dd25b25c43d14c4"} Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.773331 4728 generic.go:334] "Generic (PLEG): container finished" podID="0a7d9e95-6291-465f-9f94-f99fc86e4389" containerID="6599e4d911a2ca04a1be15a19b88842115449d5495ef14be2fe1d856ff162725" exitCode=0 Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.773365 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gg7mm" 
event={"ID":"0a7d9e95-6291-465f-9f94-f99fc86e4389","Type":"ContainerDied","Data":"6599e4d911a2ca04a1be15a19b88842115449d5495ef14be2fe1d856ff162725"} Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.781566 4728 generic.go:334] "Generic (PLEG): container finished" podID="7c5a3750-282d-4f84-a9c9-b3167aa283b8" containerID="4f6c04d856b378c061da9394525e4481bad2d1646739df2e4c933c0ece193de5" exitCode=0 Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.781626 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brtfb" event={"ID":"7c5a3750-282d-4f84-a9c9-b3167aa283b8","Type":"ContainerDied","Data":"4f6c04d856b378c061da9394525e4481bad2d1646739df2e4c933c0ece193de5"} Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.786550 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536470-xfw9v" event={"ID":"380270a6-c1d3-49a1-b3c7-9080ae9038b9","Type":"ContainerStarted","Data":"06e461d77ad5ff8e32ac2b4c96f09589f5d1521f4760cb602916d21d7e3204b2"} Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.790808 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsskq" event={"ID":"49693a3e-1583-4584-9049-fe85013bb9ab","Type":"ContainerStarted","Data":"f5d67a6396ff960e3b09e3bec62e046f4e298d27383dbd6792ccff7c975feb2d"} Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.794216 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" podStartSLOduration=6.794197578 podStartE2EDuration="6.794197578s" podCreationTimestamp="2026-02-27 10:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:30:14.790959565 +0000 UTC m=+234.753325671" watchObservedRunningTime="2026-02-27 10:30:14.794197578 +0000 UTC m=+234.756563684" Feb 27 10:30:14 crc 
kubenswrapper[4728]: I0227 10:30:14.879541 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536470-xfw9v" podStartSLOduration=1.845357956 podStartE2EDuration="14.879527685s" podCreationTimestamp="2026-02-27 10:30:00 +0000 UTC" firstStartedPulling="2026-02-27 10:30:00.955103048 +0000 UTC m=+220.917469154" lastFinishedPulling="2026-02-27 10:30:13.989272737 +0000 UTC m=+233.951638883" observedRunningTime="2026-02-27 10:30:14.863976148 +0000 UTC m=+234.826342254" watchObservedRunningTime="2026-02-27 10:30:14.879527685 +0000 UTC m=+234.841893791" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.892167 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" Feb 27 10:30:14 crc kubenswrapper[4728]: I0227 10:30:14.901946 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xsskq" podStartSLOduration=4.028362051 podStartE2EDuration="1m3.901929841s" podCreationTimestamp="2026-02-27 10:29:11 +0000 UTC" firstStartedPulling="2026-02-27 10:29:14.062053302 +0000 UTC m=+174.024419408" lastFinishedPulling="2026-02-27 10:30:13.935621052 +0000 UTC m=+233.897987198" observedRunningTime="2026-02-27 10:30:14.899870981 +0000 UTC m=+234.862237077" watchObservedRunningTime="2026-02-27 10:30:14.901929841 +0000 UTC m=+234.864295947" Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.275062 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798"] Feb 27 10:30:15 crc kubenswrapper[4728]: W0227 10:30:15.297826 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fed72ee_d3b1_4c16_8292_69269d9bf816.slice/crio-ab82dc34a8ad8011dc939a369a5abba020b94cfafa742114253c97d773550588 WatchSource:0}: Error 
finding container ab82dc34a8ad8011dc939a369a5abba020b94cfafa742114253c97d773550588: Status 404 returned error can't find the container with id ab82dc34a8ad8011dc939a369a5abba020b94cfafa742114253c97d773550588 Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.808669 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b97gn" event={"ID":"34088c2f-1e95-4227-9242-9e4cde7a9fde","Type":"ContainerStarted","Data":"177e9ea899e8eddd80df884e748aea9b96849c704208a9243f95166c80467dfb"} Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.809945 4728 generic.go:334] "Generic (PLEG): container finished" podID="380270a6-c1d3-49a1-b3c7-9080ae9038b9" containerID="06e461d77ad5ff8e32ac2b4c96f09589f5d1521f4760cb602916d21d7e3204b2" exitCode=0 Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.810008 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536470-xfw9v" event={"ID":"380270a6-c1d3-49a1-b3c7-9080ae9038b9","Type":"ContainerDied","Data":"06e461d77ad5ff8e32ac2b4c96f09589f5d1521f4760cb602916d21d7e3204b2"} Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.811569 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gg7mm" event={"ID":"0a7d9e95-6291-465f-9f94-f99fc86e4389","Type":"ContainerStarted","Data":"76a950d1b4f4a70e3d2d8839c8048be635838908c89148a6a19f95c3a8f0524d"} Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.813265 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" event={"ID":"7fed72ee-d3b1-4c16-8292-69269d9bf816","Type":"ContainerStarted","Data":"b2b00d1c8fd5a899effbb5e0a24d63391cec6ce750c81a80aa3dc4944d50364d"} Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.813296 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" 
event={"ID":"7fed72ee-d3b1-4c16-8292-69269d9bf816","Type":"ContainerStarted","Data":"ab82dc34a8ad8011dc939a369a5abba020b94cfafa742114253c97d773550588"} Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.813528 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.814666 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brtfb" event={"ID":"7c5a3750-282d-4f84-a9c9-b3167aa283b8","Type":"ContainerStarted","Data":"2ff5fada8a2dc06eb2818512884c5e08be4152467cbfd2cf39873863b50bd1e6"} Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.816283 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnfnp" event={"ID":"b27f5cf8-de13-42a0-825a-0bc27ddc8466","Type":"ContainerStarted","Data":"02bbfcda99329ca111a91bf0c11f0695387513a40b4905311ce5a5a9fed4de41"} Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.818242 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rq2kj" event={"ID":"055d41f1-5e49-481e-8662-a245ba878526","Type":"ContainerStarted","Data":"4c9eb3e0d51790034ee5e7e92ffa737175b563b7e8d311f261234b42f7048581"} Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.819828 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4bnk" event={"ID":"7771abc7-886d-41eb-b966-74538062511f","Type":"ContainerStarted","Data":"7143b72ae2d7e05b85ad009f20d696a7570ddb41432db16b879e3dcbce7fcf63"} Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.832397 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b97gn" podStartSLOduration=3.374406634 podStartE2EDuration="1m7.832379937s" podCreationTimestamp="2026-02-27 10:29:08 +0000 UTC" 
firstStartedPulling="2026-02-27 10:29:10.871291264 +0000 UTC m=+170.833657370" lastFinishedPulling="2026-02-27 10:30:15.329264557 +0000 UTC m=+235.291630673" observedRunningTime="2026-02-27 10:30:15.830146433 +0000 UTC m=+235.792512549" watchObservedRunningTime="2026-02-27 10:30:15.832379937 +0000 UTC m=+235.794746043" Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.851106 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t4bnk" podStartSLOduration=3.494663015 podStartE2EDuration="1m7.851092986s" podCreationTimestamp="2026-02-27 10:29:08 +0000 UTC" firstStartedPulling="2026-02-27 10:29:10.855422845 +0000 UTC m=+170.817788951" lastFinishedPulling="2026-02-27 10:30:15.211852796 +0000 UTC m=+235.174218922" observedRunningTime="2026-02-27 10:30:15.848688417 +0000 UTC m=+235.811054523" watchObservedRunningTime="2026-02-27 10:30:15.851092986 +0000 UTC m=+235.813459092" Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.870566 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wnfnp" podStartSLOduration=2.689975103 podStartE2EDuration="1m5.870546616s" podCreationTimestamp="2026-02-27 10:29:10 +0000 UTC" firstStartedPulling="2026-02-27 10:29:11.949028364 +0000 UTC m=+171.911394470" lastFinishedPulling="2026-02-27 10:30:15.129599877 +0000 UTC m=+235.091965983" observedRunningTime="2026-02-27 10:30:15.866465909 +0000 UTC m=+235.828832015" watchObservedRunningTime="2026-02-27 10:30:15.870546616 +0000 UTC m=+235.832912722" Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.913903 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rq2kj" podStartSLOduration=2.5685315969999998 podStartE2EDuration="1m5.913875554s" podCreationTimestamp="2026-02-27 10:29:10 +0000 UTC" firstStartedPulling="2026-02-27 10:29:11.924734796 +0000 UTC m=+171.887100902" 
lastFinishedPulling="2026-02-27 10:30:15.270078753 +0000 UTC m=+235.232444859" observedRunningTime="2026-02-27 10:30:15.912321699 +0000 UTC m=+235.874687805" watchObservedRunningTime="2026-02-27 10:30:15.913875554 +0000 UTC m=+235.876241660" Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.914831 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gg7mm" podStartSLOduration=3.624442919 podStartE2EDuration="1m7.914811931s" podCreationTimestamp="2026-02-27 10:29:08 +0000 UTC" firstStartedPulling="2026-02-27 10:29:10.901077927 +0000 UTC m=+170.863444033" lastFinishedPulling="2026-02-27 10:30:15.191446899 +0000 UTC m=+235.153813045" observedRunningTime="2026-02-27 10:30:15.893768335 +0000 UTC m=+235.856134441" watchObservedRunningTime="2026-02-27 10:30:15.914811931 +0000 UTC m=+235.877178037" Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.930579 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" podStartSLOduration=7.930561174 podStartE2EDuration="7.930561174s" podCreationTimestamp="2026-02-27 10:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:30:15.929773632 +0000 UTC m=+235.892139748" watchObservedRunningTime="2026-02-27 10:30:15.930561174 +0000 UTC m=+235.892927280" Feb 27 10:30:15 crc kubenswrapper[4728]: I0227 10:30:15.963346 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-brtfb" podStartSLOduration=3.628330154 podStartE2EDuration="1m7.963310618s" podCreationTimestamp="2026-02-27 10:29:08 +0000 UTC" firstStartedPulling="2026-02-27 10:29:10.835752808 +0000 UTC m=+170.798118914" lastFinishedPulling="2026-02-27 10:30:15.170733272 +0000 UTC m=+235.133099378" observedRunningTime="2026-02-27 
10:30:15.96303954 +0000 UTC m=+235.925405646" watchObservedRunningTime="2026-02-27 10:30:15.963310618 +0000 UTC m=+235.925676714" Feb 27 10:30:16 crc kubenswrapper[4728]: I0227 10:30:16.255473 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" Feb 27 10:30:17 crc kubenswrapper[4728]: I0227 10:30:17.175685 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536470-xfw9v" Feb 27 10:30:17 crc kubenswrapper[4728]: I0227 10:30:17.367708 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw6km\" (UniqueName: \"kubernetes.io/projected/380270a6-c1d3-49a1-b3c7-9080ae9038b9-kube-api-access-hw6km\") pod \"380270a6-c1d3-49a1-b3c7-9080ae9038b9\" (UID: \"380270a6-c1d3-49a1-b3c7-9080ae9038b9\") " Feb 27 10:30:17 crc kubenswrapper[4728]: I0227 10:30:17.373773 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/380270a6-c1d3-49a1-b3c7-9080ae9038b9-kube-api-access-hw6km" (OuterVolumeSpecName: "kube-api-access-hw6km") pod "380270a6-c1d3-49a1-b3c7-9080ae9038b9" (UID: "380270a6-c1d3-49a1-b3c7-9080ae9038b9"). InnerVolumeSpecName "kube-api-access-hw6km". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:30:17 crc kubenswrapper[4728]: I0227 10:30:17.468693 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw6km\" (UniqueName: \"kubernetes.io/projected/380270a6-c1d3-49a1-b3c7-9080ae9038b9-kube-api-access-hw6km\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:17 crc kubenswrapper[4728]: I0227 10:30:17.831627 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536470-xfw9v" event={"ID":"380270a6-c1d3-49a1-b3c7-9080ae9038b9","Type":"ContainerDied","Data":"a3f788207d9f887b0d50ba225fba5d077a0ede8399c8e389336d01b903bbff04"} Feb 27 10:30:17 crc kubenswrapper[4728]: I0227 10:30:17.831673 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3f788207d9f887b0d50ba225fba5d077a0ede8399c8e389336d01b903bbff04" Feb 27 10:30:17 crc kubenswrapper[4728]: I0227 10:30:17.831645 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536470-xfw9v" Feb 27 10:30:18 crc kubenswrapper[4728]: I0227 10:30:18.593118 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b97gn" Feb 27 10:30:18 crc kubenswrapper[4728]: I0227 10:30:18.593191 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b97gn" Feb 27 10:30:18 crc kubenswrapper[4728]: I0227 10:30:18.665957 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b97gn" Feb 27 10:30:18 crc kubenswrapper[4728]: I0227 10:30:18.802377 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t4bnk" Feb 27 10:30:18 crc kubenswrapper[4728]: I0227 10:30:18.802922 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-t4bnk" Feb 27 10:30:18 crc kubenswrapper[4728]: I0227 10:30:18.852206 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t4bnk" Feb 27 10:30:19 crc kubenswrapper[4728]: I0227 10:30:19.005624 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-brtfb" Feb 27 10:30:19 crc kubenswrapper[4728]: I0227 10:30:19.005702 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-brtfb" Feb 27 10:30:19 crc kubenswrapper[4728]: I0227 10:30:19.070565 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-brtfb" Feb 27 10:30:19 crc kubenswrapper[4728]: I0227 10:30:19.243427 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gg7mm" Feb 27 10:30:19 crc kubenswrapper[4728]: I0227 10:30:19.243933 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gg7mm" Feb 27 10:30:19 crc kubenswrapper[4728]: I0227 10:30:19.306776 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gg7mm" Feb 27 10:30:20 crc kubenswrapper[4728]: I0227 10:30:20.777322 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wnfnp" Feb 27 10:30:20 crc kubenswrapper[4728]: I0227 10:30:20.777390 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wnfnp" Feb 27 10:30:20 crc kubenswrapper[4728]: I0227 10:30:20.831246 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wnfnp" Feb 27 10:30:20 crc kubenswrapper[4728]: I0227 
10:30:20.900709 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gg7mm"
Feb 27 10:30:20 crc kubenswrapper[4728]: I0227 10:30:20.917085 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wnfnp"
Feb 27 10:30:21 crc kubenswrapper[4728]: I0227 10:30:21.240605 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rq2kj"
Feb 27 10:30:21 crc kubenswrapper[4728]: I0227 10:30:21.240796 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rq2kj"
Feb 27 10:30:21 crc kubenswrapper[4728]: I0227 10:30:21.292615 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rq2kj"
Feb 27 10:30:21 crc kubenswrapper[4728]: I0227 10:30:21.923606 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rq2kj"
Feb 27 10:30:22 crc kubenswrapper[4728]: I0227 10:30:22.194746 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xsskq"
Feb 27 10:30:22 crc kubenswrapper[4728]: I0227 10:30:22.194830 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xsskq"
Feb 27 10:30:22 crc kubenswrapper[4728]: I0227 10:30:22.246173 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xsskq"
Feb 27 10:30:22 crc kubenswrapper[4728]: I0227 10:30:22.604126 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gg7mm"]
Feb 27 10:30:22 crc kubenswrapper[4728]: I0227 10:30:22.865230 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gg7mm" podUID="0a7d9e95-6291-465f-9f94-f99fc86e4389" containerName="registry-server" containerID="cri-o://76a950d1b4f4a70e3d2d8839c8048be635838908c89148a6a19f95c3a8f0524d" gracePeriod=2
Feb 27 10:30:22 crc kubenswrapper[4728]: I0227 10:30:22.931418 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xsskq"
Feb 27 10:30:23 crc kubenswrapper[4728]: I0227 10:30:23.874738 4728 generic.go:334] "Generic (PLEG): container finished" podID="0a7d9e95-6291-465f-9f94-f99fc86e4389" containerID="76a950d1b4f4a70e3d2d8839c8048be635838908c89148a6a19f95c3a8f0524d" exitCode=0
Feb 27 10:30:23 crc kubenswrapper[4728]: I0227 10:30:23.874825 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gg7mm" event={"ID":"0a7d9e95-6291-465f-9f94-f99fc86e4389","Type":"ContainerDied","Data":"76a950d1b4f4a70e3d2d8839c8048be635838908c89148a6a19f95c3a8f0524d"}
Feb 27 10:30:23 crc kubenswrapper[4728]: I0227 10:30:23.969987 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gg7mm"
Feb 27 10:30:24 crc kubenswrapper[4728]: I0227 10:30:24.089439 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a7d9e95-6291-465f-9f94-f99fc86e4389-catalog-content\") pod \"0a7d9e95-6291-465f-9f94-f99fc86e4389\" (UID: \"0a7d9e95-6291-465f-9f94-f99fc86e4389\") "
Feb 27 10:30:24 crc kubenswrapper[4728]: I0227 10:30:24.089607 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a7d9e95-6291-465f-9f94-f99fc86e4389-utilities\") pod \"0a7d9e95-6291-465f-9f94-f99fc86e4389\" (UID: \"0a7d9e95-6291-465f-9f94-f99fc86e4389\") "
Feb 27 10:30:24 crc kubenswrapper[4728]: I0227 10:30:24.089702 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml2lk\" (UniqueName: \"kubernetes.io/projected/0a7d9e95-6291-465f-9f94-f99fc86e4389-kube-api-access-ml2lk\") pod \"0a7d9e95-6291-465f-9f94-f99fc86e4389\" (UID: \"0a7d9e95-6291-465f-9f94-f99fc86e4389\") "
Feb 27 10:30:24 crc kubenswrapper[4728]: I0227 10:30:24.091093 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a7d9e95-6291-465f-9f94-f99fc86e4389-utilities" (OuterVolumeSpecName: "utilities") pod "0a7d9e95-6291-465f-9f94-f99fc86e4389" (UID: "0a7d9e95-6291-465f-9f94-f99fc86e4389"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:30:24 crc kubenswrapper[4728]: I0227 10:30:24.098938 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7d9e95-6291-465f-9f94-f99fc86e4389-kube-api-access-ml2lk" (OuterVolumeSpecName: "kube-api-access-ml2lk") pod "0a7d9e95-6291-465f-9f94-f99fc86e4389" (UID: "0a7d9e95-6291-465f-9f94-f99fc86e4389"). InnerVolumeSpecName "kube-api-access-ml2lk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:30:24 crc kubenswrapper[4728]: I0227 10:30:24.185637 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a7d9e95-6291-465f-9f94-f99fc86e4389-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a7d9e95-6291-465f-9f94-f99fc86e4389" (UID: "0a7d9e95-6291-465f-9f94-f99fc86e4389"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:30:24 crc kubenswrapper[4728]: I0227 10:30:24.191880 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a7d9e95-6291-465f-9f94-f99fc86e4389-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 10:30:24 crc kubenswrapper[4728]: I0227 10:30:24.191909 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a7d9e95-6291-465f-9f94-f99fc86e4389-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 10:30:24 crc kubenswrapper[4728]: I0227 10:30:24.191923 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml2lk\" (UniqueName: \"kubernetes.io/projected/0a7d9e95-6291-465f-9f94-f99fc86e4389-kube-api-access-ml2lk\") on node \"crc\" DevicePath \"\""
Feb 27 10:30:24 crc kubenswrapper[4728]: I0227 10:30:24.895440 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gg7mm" event={"ID":"0a7d9e95-6291-465f-9f94-f99fc86e4389","Type":"ContainerDied","Data":"d608e445ce5ff869da41ef045aaa499ed97caf7477db66b53a7c4659d07b97e4"}
Feb 27 10:30:24 crc kubenswrapper[4728]: I0227 10:30:24.895644 4728 scope.go:117] "RemoveContainer" containerID="76a950d1b4f4a70e3d2d8839c8048be635838908c89148a6a19f95c3a8f0524d"
Feb 27 10:30:24 crc kubenswrapper[4728]: I0227 10:30:24.896435 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gg7mm"
Feb 27 10:30:24 crc kubenswrapper[4728]: I0227 10:30:24.930407 4728 scope.go:117] "RemoveContainer" containerID="6599e4d911a2ca04a1be15a19b88842115449d5495ef14be2fe1d856ff162725"
Feb 27 10:30:24 crc kubenswrapper[4728]: I0227 10:30:24.939020 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gg7mm"]
Feb 27 10:30:24 crc kubenswrapper[4728]: I0227 10:30:24.945287 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gg7mm"]
Feb 27 10:30:24 crc kubenswrapper[4728]: I0227 10:30:24.952626 4728 scope.go:117] "RemoveContainer" containerID="c3b3f06c1b3ad39d67c4cd03c6dd87964a2ce5c3733bf9cf37afbb4bc5878188"
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.001996 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rq2kj"]
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.002254 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rq2kj" podUID="055d41f1-5e49-481e-8662-a245ba878526" containerName="registry-server" containerID="cri-o://4c9eb3e0d51790034ee5e7e92ffa737175b563b7e8d311f261234b42f7048581" gracePeriod=2
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.205310 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xsskq"]
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.205748 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xsskq" podUID="49693a3e-1583-4584-9049-fe85013bb9ab" containerName="registry-server" containerID="cri-o://f5d67a6396ff960e3b09e3bec62e046f4e298d27383dbd6792ccff7c975feb2d" gracePeriod=2
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.472112 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rq2kj"
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.506125 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055d41f1-5e49-481e-8662-a245ba878526-catalog-content\") pod \"055d41f1-5e49-481e-8662-a245ba878526\" (UID: \"055d41f1-5e49-481e-8662-a245ba878526\") "
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.506227 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055d41f1-5e49-481e-8662-a245ba878526-utilities\") pod \"055d41f1-5e49-481e-8662-a245ba878526\" (UID: \"055d41f1-5e49-481e-8662-a245ba878526\") "
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.506291 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd859\" (UniqueName: \"kubernetes.io/projected/055d41f1-5e49-481e-8662-a245ba878526-kube-api-access-nd859\") pod \"055d41f1-5e49-481e-8662-a245ba878526\" (UID: \"055d41f1-5e49-481e-8662-a245ba878526\") "
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.507135 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/055d41f1-5e49-481e-8662-a245ba878526-utilities" (OuterVolumeSpecName: "utilities") pod "055d41f1-5e49-481e-8662-a245ba878526" (UID: "055d41f1-5e49-481e-8662-a245ba878526"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.511037 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/055d41f1-5e49-481e-8662-a245ba878526-kube-api-access-nd859" (OuterVolumeSpecName: "kube-api-access-nd859") pod "055d41f1-5e49-481e-8662-a245ba878526" (UID: "055d41f1-5e49-481e-8662-a245ba878526"). InnerVolumeSpecName "kube-api-access-nd859". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.560287 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/055d41f1-5e49-481e-8662-a245ba878526-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "055d41f1-5e49-481e-8662-a245ba878526" (UID: "055d41f1-5e49-481e-8662-a245ba878526"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.566669 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xsskq"
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.607098 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt98j\" (UniqueName: \"kubernetes.io/projected/49693a3e-1583-4584-9049-fe85013bb9ab-kube-api-access-rt98j\") pod \"49693a3e-1583-4584-9049-fe85013bb9ab\" (UID: \"49693a3e-1583-4584-9049-fe85013bb9ab\") "
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.607183 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49693a3e-1583-4584-9049-fe85013bb9ab-catalog-content\") pod \"49693a3e-1583-4584-9049-fe85013bb9ab\" (UID: \"49693a3e-1583-4584-9049-fe85013bb9ab\") "
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.607295 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49693a3e-1583-4584-9049-fe85013bb9ab-utilities\") pod \"49693a3e-1583-4584-9049-fe85013bb9ab\" (UID: \"49693a3e-1583-4584-9049-fe85013bb9ab\") "
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.607553 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055d41f1-5e49-481e-8662-a245ba878526-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.607574 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055d41f1-5e49-481e-8662-a245ba878526-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.607586 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd859\" (UniqueName: \"kubernetes.io/projected/055d41f1-5e49-481e-8662-a245ba878526-kube-api-access-nd859\") on node \"crc\" DevicePath \"\""
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.608098 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49693a3e-1583-4584-9049-fe85013bb9ab-utilities" (OuterVolumeSpecName: "utilities") pod "49693a3e-1583-4584-9049-fe85013bb9ab" (UID: "49693a3e-1583-4584-9049-fe85013bb9ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.626789 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49693a3e-1583-4584-9049-fe85013bb9ab-kube-api-access-rt98j" (OuterVolumeSpecName: "kube-api-access-rt98j") pod "49693a3e-1583-4584-9049-fe85013bb9ab" (UID: "49693a3e-1583-4584-9049-fe85013bb9ab"). InnerVolumeSpecName "kube-api-access-rt98j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.708463 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49693a3e-1583-4584-9049-fe85013bb9ab-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.708575 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt98j\" (UniqueName: \"kubernetes.io/projected/49693a3e-1583-4584-9049-fe85013bb9ab-kube-api-access-rt98j\") on node \"crc\" DevicePath \"\""
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.726908 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49693a3e-1583-4584-9049-fe85013bb9ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49693a3e-1583-4584-9049-fe85013bb9ab" (UID: "49693a3e-1583-4584-9049-fe85013bb9ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.810710 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49693a3e-1583-4584-9049-fe85013bb9ab-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.902867 4728 generic.go:334] "Generic (PLEG): container finished" podID="49693a3e-1583-4584-9049-fe85013bb9ab" containerID="f5d67a6396ff960e3b09e3bec62e046f4e298d27383dbd6792ccff7c975feb2d" exitCode=0
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.902938 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xsskq"
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.902953 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsskq" event={"ID":"49693a3e-1583-4584-9049-fe85013bb9ab","Type":"ContainerDied","Data":"f5d67a6396ff960e3b09e3bec62e046f4e298d27383dbd6792ccff7c975feb2d"}
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.903058 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsskq" event={"ID":"49693a3e-1583-4584-9049-fe85013bb9ab","Type":"ContainerDied","Data":"fbc4f4cc9abb4c564f8c7bb60b3a363970d283b50849bfae94b7d280ce129783"}
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.903099 4728 scope.go:117] "RemoveContainer" containerID="f5d67a6396ff960e3b09e3bec62e046f4e298d27383dbd6792ccff7c975feb2d"
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.907476 4728 generic.go:334] "Generic (PLEG): container finished" podID="055d41f1-5e49-481e-8662-a245ba878526" containerID="4c9eb3e0d51790034ee5e7e92ffa737175b563b7e8d311f261234b42f7048581" exitCode=0
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.907584 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rq2kj"
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.907985 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rq2kj" event={"ID":"055d41f1-5e49-481e-8662-a245ba878526","Type":"ContainerDied","Data":"4c9eb3e0d51790034ee5e7e92ffa737175b563b7e8d311f261234b42f7048581"}
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.908121 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rq2kj" event={"ID":"055d41f1-5e49-481e-8662-a245ba878526","Type":"ContainerDied","Data":"f995b5b18ffaa7610f311a384b788553479fcc99627719e1d6b7bf86fae7c2a1"}
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.940710 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xsskq"]
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.946494 4728 scope.go:117] "RemoveContainer" containerID="70d34416b3e365a6605526fabfe3c159ae9fc6a91e93aa1b0ac081190033b7f9"
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.953375 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xsskq"]
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.963055 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rq2kj"]
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.968878 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rq2kj"]
Feb 27 10:30:25 crc kubenswrapper[4728]: I0227 10:30:25.974914 4728 scope.go:117] "RemoveContainer" containerID="bee8ee694307cb7e7009e7378d26a8070fb0e8dc035e34d6cc00a05c1023408e"
Feb 27 10:30:26 crc kubenswrapper[4728]: I0227 10:30:26.001129 4728 scope.go:117] "RemoveContainer" containerID="f5d67a6396ff960e3b09e3bec62e046f4e298d27383dbd6792ccff7c975feb2d"
Feb 27 10:30:26 crc kubenswrapper[4728]: E0227 10:30:26.001593 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5d67a6396ff960e3b09e3bec62e046f4e298d27383dbd6792ccff7c975feb2d\": container with ID starting with f5d67a6396ff960e3b09e3bec62e046f4e298d27383dbd6792ccff7c975feb2d not found: ID does not exist" containerID="f5d67a6396ff960e3b09e3bec62e046f4e298d27383dbd6792ccff7c975feb2d"
Feb 27 10:30:26 crc kubenswrapper[4728]: I0227 10:30:26.001649 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5d67a6396ff960e3b09e3bec62e046f4e298d27383dbd6792ccff7c975feb2d"} err="failed to get container status \"f5d67a6396ff960e3b09e3bec62e046f4e298d27383dbd6792ccff7c975feb2d\": rpc error: code = NotFound desc = could not find container \"f5d67a6396ff960e3b09e3bec62e046f4e298d27383dbd6792ccff7c975feb2d\": container with ID starting with f5d67a6396ff960e3b09e3bec62e046f4e298d27383dbd6792ccff7c975feb2d not found: ID does not exist"
Feb 27 10:30:26 crc kubenswrapper[4728]: I0227 10:30:26.001681 4728 scope.go:117] "RemoveContainer" containerID="70d34416b3e365a6605526fabfe3c159ae9fc6a91e93aa1b0ac081190033b7f9"
Feb 27 10:30:26 crc kubenswrapper[4728]: E0227 10:30:26.002302 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70d34416b3e365a6605526fabfe3c159ae9fc6a91e93aa1b0ac081190033b7f9\": container with ID starting with 70d34416b3e365a6605526fabfe3c159ae9fc6a91e93aa1b0ac081190033b7f9 not found: ID does not exist" containerID="70d34416b3e365a6605526fabfe3c159ae9fc6a91e93aa1b0ac081190033b7f9"
Feb 27 10:30:26 crc kubenswrapper[4728]: I0227 10:30:26.002339 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70d34416b3e365a6605526fabfe3c159ae9fc6a91e93aa1b0ac081190033b7f9"} err="failed to get container status \"70d34416b3e365a6605526fabfe3c159ae9fc6a91e93aa1b0ac081190033b7f9\": rpc error: code = NotFound desc = could not find container \"70d34416b3e365a6605526fabfe3c159ae9fc6a91e93aa1b0ac081190033b7f9\": container with ID starting with 70d34416b3e365a6605526fabfe3c159ae9fc6a91e93aa1b0ac081190033b7f9 not found: ID does not exist"
Feb 27 10:30:26 crc kubenswrapper[4728]: I0227 10:30:26.002360 4728 scope.go:117] "RemoveContainer" containerID="bee8ee694307cb7e7009e7378d26a8070fb0e8dc035e34d6cc00a05c1023408e"
Feb 27 10:30:26 crc kubenswrapper[4728]: E0227 10:30:26.002714 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bee8ee694307cb7e7009e7378d26a8070fb0e8dc035e34d6cc00a05c1023408e\": container with ID starting with bee8ee694307cb7e7009e7378d26a8070fb0e8dc035e34d6cc00a05c1023408e not found: ID does not exist" containerID="bee8ee694307cb7e7009e7378d26a8070fb0e8dc035e34d6cc00a05c1023408e"
Feb 27 10:30:26 crc kubenswrapper[4728]: I0227 10:30:26.002747 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bee8ee694307cb7e7009e7378d26a8070fb0e8dc035e34d6cc00a05c1023408e"} err="failed to get container status \"bee8ee694307cb7e7009e7378d26a8070fb0e8dc035e34d6cc00a05c1023408e\": rpc error: code = NotFound desc = could not find container \"bee8ee694307cb7e7009e7378d26a8070fb0e8dc035e34d6cc00a05c1023408e\": container with ID starting with bee8ee694307cb7e7009e7378d26a8070fb0e8dc035e34d6cc00a05c1023408e not found: ID does not exist"
Feb 27 10:30:26 crc kubenswrapper[4728]: I0227 10:30:26.002780 4728 scope.go:117] "RemoveContainer" containerID="4c9eb3e0d51790034ee5e7e92ffa737175b563b7e8d311f261234b42f7048581"
Feb 27 10:30:26 crc kubenswrapper[4728]: I0227 10:30:26.020599 4728 scope.go:117] "RemoveContainer" containerID="811e272834c2a5900003e08d87412dec137b46ad0dd8e2f482aa92575b1ffd56"
Feb 27 10:30:26 crc kubenswrapper[4728]: I0227 10:30:26.034755 4728 scope.go:117] "RemoveContainer" containerID="73c89c06ea962f51600b9489cd54f13a9780d3127827b0d05a55bc2d7d0437d4"
Feb 27 10:30:26 crc kubenswrapper[4728]: I0227 10:30:26.046604 4728 scope.go:117] "RemoveContainer" containerID="4c9eb3e0d51790034ee5e7e92ffa737175b563b7e8d311f261234b42f7048581"
Feb 27 10:30:26 crc kubenswrapper[4728]: E0227 10:30:26.046950 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c9eb3e0d51790034ee5e7e92ffa737175b563b7e8d311f261234b42f7048581\": container with ID starting with 4c9eb3e0d51790034ee5e7e92ffa737175b563b7e8d311f261234b42f7048581 not found: ID does not exist" containerID="4c9eb3e0d51790034ee5e7e92ffa737175b563b7e8d311f261234b42f7048581"
Feb 27 10:30:26 crc kubenswrapper[4728]: I0227 10:30:26.046980 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c9eb3e0d51790034ee5e7e92ffa737175b563b7e8d311f261234b42f7048581"} err="failed to get container status \"4c9eb3e0d51790034ee5e7e92ffa737175b563b7e8d311f261234b42f7048581\": rpc error: code = NotFound desc = could not find container \"4c9eb3e0d51790034ee5e7e92ffa737175b563b7e8d311f261234b42f7048581\": container with ID starting with 4c9eb3e0d51790034ee5e7e92ffa737175b563b7e8d311f261234b42f7048581 not found: ID does not exist"
Feb 27 10:30:26 crc kubenswrapper[4728]: I0227 10:30:26.047019 4728 scope.go:117] "RemoveContainer" containerID="811e272834c2a5900003e08d87412dec137b46ad0dd8e2f482aa92575b1ffd56"
Feb 27 10:30:26 crc kubenswrapper[4728]: E0227 10:30:26.047286 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"811e272834c2a5900003e08d87412dec137b46ad0dd8e2f482aa92575b1ffd56\": container with ID starting with 811e272834c2a5900003e08d87412dec137b46ad0dd8e2f482aa92575b1ffd56 not found: ID does not exist" containerID="811e272834c2a5900003e08d87412dec137b46ad0dd8e2f482aa92575b1ffd56"
Feb 27 10:30:26 crc kubenswrapper[4728]: I0227 10:30:26.047306 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"811e272834c2a5900003e08d87412dec137b46ad0dd8e2f482aa92575b1ffd56"} err="failed to get container status \"811e272834c2a5900003e08d87412dec137b46ad0dd8e2f482aa92575b1ffd56\": rpc error: code = NotFound desc = could not find container \"811e272834c2a5900003e08d87412dec137b46ad0dd8e2f482aa92575b1ffd56\": container with ID starting with 811e272834c2a5900003e08d87412dec137b46ad0dd8e2f482aa92575b1ffd56 not found: ID does not exist"
Feb 27 10:30:26 crc kubenswrapper[4728]: I0227 10:30:26.047319 4728 scope.go:117] "RemoveContainer" containerID="73c89c06ea962f51600b9489cd54f13a9780d3127827b0d05a55bc2d7d0437d4"
Feb 27 10:30:26 crc kubenswrapper[4728]: E0227 10:30:26.047748 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73c89c06ea962f51600b9489cd54f13a9780d3127827b0d05a55bc2d7d0437d4\": container with ID starting with 73c89c06ea962f51600b9489cd54f13a9780d3127827b0d05a55bc2d7d0437d4 not found: ID does not exist" containerID="73c89c06ea962f51600b9489cd54f13a9780d3127827b0d05a55bc2d7d0437d4"
Feb 27 10:30:26 crc kubenswrapper[4728]: I0227 10:30:26.047769 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73c89c06ea962f51600b9489cd54f13a9780d3127827b0d05a55bc2d7d0437d4"} err="failed to get container status \"73c89c06ea962f51600b9489cd54f13a9780d3127827b0d05a55bc2d7d0437d4\": rpc error: code = NotFound desc = could not find container \"73c89c06ea962f51600b9489cd54f13a9780d3127827b0d05a55bc2d7d0437d4\": container with ID starting with 73c89c06ea962f51600b9489cd54f13a9780d3127827b0d05a55bc2d7d0437d4 not found: ID does not exist"
Feb 27 10:30:26 crc kubenswrapper[4728]: I0227 10:30:26.733555 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="055d41f1-5e49-481e-8662-a245ba878526" path="/var/lib/kubelet/pods/055d41f1-5e49-481e-8662-a245ba878526/volumes"
Feb 27 10:30:26 crc kubenswrapper[4728]: I0227 10:30:26.734459 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a7d9e95-6291-465f-9f94-f99fc86e4389" path="/var/lib/kubelet/pods/0a7d9e95-6291-465f-9f94-f99fc86e4389/volumes"
Feb 27 10:30:26 crc kubenswrapper[4728]: I0227 10:30:26.735105 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49693a3e-1583-4584-9049-fe85013bb9ab" path="/var/lib/kubelet/pods/49693a3e-1583-4584-9049-fe85013bb9ab/volumes"
Feb 27 10:30:28 crc kubenswrapper[4728]: I0227 10:30:28.402241 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77647bd6bd-tptvb"]
Feb 27 10:30:28 crc kubenswrapper[4728]: I0227 10:30:28.402562 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" podUID="3f66047b-9fdb-4972-b8e4-59eedce02a32" containerName="controller-manager" containerID="cri-o://c918ac39608ba04b6b43a4dbfb408fe6c6c22533af4e2060da343b5787e82b63" gracePeriod=30
Feb 27 10:30:28 crc kubenswrapper[4728]: I0227 10:30:28.489243 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798"]
Feb 27 10:30:28 crc kubenswrapper[4728]: I0227 10:30:28.489883 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" podUID="7fed72ee-d3b1-4c16-8292-69269d9bf816" containerName="route-controller-manager" containerID="cri-o://b2b00d1c8fd5a899effbb5e0a24d63391cec6ce750c81a80aa3dc4944d50364d" gracePeriod=30
Feb 27 10:30:28 crc kubenswrapper[4728]: I0227 10:30:28.648230 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b97gn"
Feb 27 10:30:28 crc kubenswrapper[4728]: I0227 10:30:28.863109 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t4bnk"
Feb 27 10:30:28 crc kubenswrapper[4728]: I0227 10:30:28.932860 4728 generic.go:334] "Generic (PLEG): container finished" podID="7fed72ee-d3b1-4c16-8292-69269d9bf816" containerID="b2b00d1c8fd5a899effbb5e0a24d63391cec6ce750c81a80aa3dc4944d50364d" exitCode=0
Feb 27 10:30:28 crc kubenswrapper[4728]: I0227 10:30:28.932933 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" event={"ID":"7fed72ee-d3b1-4c16-8292-69269d9bf816","Type":"ContainerDied","Data":"b2b00d1c8fd5a899effbb5e0a24d63391cec6ce750c81a80aa3dc4944d50364d"}
Feb 27 10:30:28 crc kubenswrapper[4728]: I0227 10:30:28.935050 4728 generic.go:334] "Generic (PLEG): container finished" podID="3f66047b-9fdb-4972-b8e4-59eedce02a32" containerID="c918ac39608ba04b6b43a4dbfb408fe6c6c22533af4e2060da343b5787e82b63" exitCode=0
Feb 27 10:30:28 crc kubenswrapper[4728]: I0227 10:30:28.935089 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" event={"ID":"3f66047b-9fdb-4972-b8e4-59eedce02a32","Type":"ContainerDied","Data":"c918ac39608ba04b6b43a4dbfb408fe6c6c22533af4e2060da343b5787e82b63"}
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.038779 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798"
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.038817 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb"
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.050569 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-brtfb"
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.050773 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqwnm\" (UniqueName: \"kubernetes.io/projected/7fed72ee-d3b1-4c16-8292-69269d9bf816-kube-api-access-cqwnm\") pod \"7fed72ee-d3b1-4c16-8292-69269d9bf816\" (UID: \"7fed72ee-d3b1-4c16-8292-69269d9bf816\") "
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.050829 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f66047b-9fdb-4972-b8e4-59eedce02a32-config\") pod \"3f66047b-9fdb-4972-b8e4-59eedce02a32\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") "
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.050900 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fed72ee-d3b1-4c16-8292-69269d9bf816-client-ca\") pod \"7fed72ee-d3b1-4c16-8292-69269d9bf816\" (UID: \"7fed72ee-d3b1-4c16-8292-69269d9bf816\") "
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.050979 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fed72ee-d3b1-4c16-8292-69269d9bf816-serving-cert\") pod \"7fed72ee-d3b1-4c16-8292-69269d9bf816\" (UID: \"7fed72ee-d3b1-4c16-8292-69269d9bf816\") "
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.051871 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fed72ee-d3b1-4c16-8292-69269d9bf816-client-ca" (OuterVolumeSpecName: "client-ca") pod "7fed72ee-d3b1-4c16-8292-69269d9bf816" (UID: "7fed72ee-d3b1-4c16-8292-69269d9bf816"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.051992 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f66047b-9fdb-4972-b8e4-59eedce02a32-config" (OuterVolumeSpecName: "config") pod "3f66047b-9fdb-4972-b8e4-59eedce02a32" (UID: "3f66047b-9fdb-4972-b8e4-59eedce02a32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.051056 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f66047b-9fdb-4972-b8e4-59eedce02a32-serving-cert\") pod \"3f66047b-9fdb-4972-b8e4-59eedce02a32\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") "
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.052481 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f66047b-9fdb-4972-b8e4-59eedce02a32-proxy-ca-bundles\") pod \"3f66047b-9fdb-4972-b8e4-59eedce02a32\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") "
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.052540 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f66047b-9fdb-4972-b8e4-59eedce02a32-client-ca\") pod \"3f66047b-9fdb-4972-b8e4-59eedce02a32\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") "
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.052604 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zkcs\" (UniqueName: \"kubernetes.io/projected/3f66047b-9fdb-4972-b8e4-59eedce02a32-kube-api-access-5zkcs\") pod \"3f66047b-9fdb-4972-b8e4-59eedce02a32\" (UID: \"3f66047b-9fdb-4972-b8e4-59eedce02a32\") "
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.052669 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fed72ee-d3b1-4c16-8292-69269d9bf816-config\") pod \"7fed72ee-d3b1-4c16-8292-69269d9bf816\" (UID: \"7fed72ee-d3b1-4c16-8292-69269d9bf816\") "
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.052985 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f66047b-9fdb-4972-b8e4-59eedce02a32-config\") on node \"crc\" DevicePath \"\""
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.053005 4728 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fed72ee-d3b1-4c16-8292-69269d9bf816-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.053010 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f66047b-9fdb-4972-b8e4-59eedce02a32-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3f66047b-9fdb-4972-b8e4-59eedce02a32" (UID: "3f66047b-9fdb-4972-b8e4-59eedce02a32"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.053020 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f66047b-9fdb-4972-b8e4-59eedce02a32-client-ca" (OuterVolumeSpecName: "client-ca") pod "3f66047b-9fdb-4972-b8e4-59eedce02a32" (UID: "3f66047b-9fdb-4972-b8e4-59eedce02a32"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.053792 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fed72ee-d3b1-4c16-8292-69269d9bf816-config" (OuterVolumeSpecName: "config") pod "7fed72ee-d3b1-4c16-8292-69269d9bf816" (UID: "7fed72ee-d3b1-4c16-8292-69269d9bf816"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.060892 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fed72ee-d3b1-4c16-8292-69269d9bf816-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7fed72ee-d3b1-4c16-8292-69269d9bf816" (UID: "7fed72ee-d3b1-4c16-8292-69269d9bf816"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.066542 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f66047b-9fdb-4972-b8e4-59eedce02a32-kube-api-access-5zkcs" (OuterVolumeSpecName: "kube-api-access-5zkcs") pod "3f66047b-9fdb-4972-b8e4-59eedce02a32" (UID: "3f66047b-9fdb-4972-b8e4-59eedce02a32"). InnerVolumeSpecName "kube-api-access-5zkcs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.067757 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fed72ee-d3b1-4c16-8292-69269d9bf816-kube-api-access-cqwnm" (OuterVolumeSpecName: "kube-api-access-cqwnm") pod "7fed72ee-d3b1-4c16-8292-69269d9bf816" (UID: "7fed72ee-d3b1-4c16-8292-69269d9bf816"). InnerVolumeSpecName "kube-api-access-cqwnm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.069127 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f66047b-9fdb-4972-b8e4-59eedce02a32-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3f66047b-9fdb-4972-b8e4-59eedce02a32" (UID: "3f66047b-9fdb-4972-b8e4-59eedce02a32"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.154852 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fed72ee-d3b1-4c16-8292-69269d9bf816-config\") on node \"crc\" DevicePath \"\""
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.154901 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqwnm\" (UniqueName: \"kubernetes.io/projected/7fed72ee-d3b1-4c16-8292-69269d9bf816-kube-api-access-cqwnm\") on node \"crc\" DevicePath \"\""
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.154920 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fed72ee-d3b1-4c16-8292-69269d9bf816-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.154939 4728 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f66047b-9fdb-4972-b8e4-59eedce02a32-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.154958 4728 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f66047b-9fdb-4972-b8e4-59eedce02a32-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.154974 4728 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName:
\"kubernetes.io/configmap/3f66047b-9fdb-4972-b8e4-59eedce02a32-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.154989 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zkcs\" (UniqueName: \"kubernetes.io/projected/3f66047b-9fdb-4972-b8e4-59eedce02a32-kube-api-access-5zkcs\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.554621 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6749b8c876-bbl5m"] Feb 27 10:30:29 crc kubenswrapper[4728]: E0227 10:30:29.556639 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7d9e95-6291-465f-9f94-f99fc86e4389" containerName="extract-content" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.556846 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7d9e95-6291-465f-9f94-f99fc86e4389" containerName="extract-content" Feb 27 10:30:29 crc kubenswrapper[4728]: E0227 10:30:29.557026 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055d41f1-5e49-481e-8662-a245ba878526" containerName="registry-server" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.557184 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="055d41f1-5e49-481e-8662-a245ba878526" containerName="registry-server" Feb 27 10:30:29 crc kubenswrapper[4728]: E0227 10:30:29.557344 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49693a3e-1583-4584-9049-fe85013bb9ab" containerName="extract-utilities" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.557546 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="49693a3e-1583-4584-9049-fe85013bb9ab" containerName="extract-utilities" Feb 27 10:30:29 crc kubenswrapper[4728]: E0227 10:30:29.557756 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="380270a6-c1d3-49a1-b3c7-9080ae9038b9" containerName="oc" Feb 27 10:30:29 crc 
kubenswrapper[4728]: I0227 10:30:29.557918 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="380270a6-c1d3-49a1-b3c7-9080ae9038b9" containerName="oc" Feb 27 10:30:29 crc kubenswrapper[4728]: E0227 10:30:29.558096 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f66047b-9fdb-4972-b8e4-59eedce02a32" containerName="controller-manager" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.558242 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f66047b-9fdb-4972-b8e4-59eedce02a32" containerName="controller-manager" Feb 27 10:30:29 crc kubenswrapper[4728]: E0227 10:30:29.558418 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055d41f1-5e49-481e-8662-a245ba878526" containerName="extract-content" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.558636 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="055d41f1-5e49-481e-8662-a245ba878526" containerName="extract-content" Feb 27 10:30:29 crc kubenswrapper[4728]: E0227 10:30:29.558832 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49693a3e-1583-4584-9049-fe85013bb9ab" containerName="extract-content" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.558992 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="49693a3e-1583-4584-9049-fe85013bb9ab" containerName="extract-content" Feb 27 10:30:29 crc kubenswrapper[4728]: E0227 10:30:29.559177 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7d9e95-6291-465f-9f94-f99fc86e4389" containerName="registry-server" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.559342 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7d9e95-6291-465f-9f94-f99fc86e4389" containerName="registry-server" Feb 27 10:30:29 crc kubenswrapper[4728]: E0227 10:30:29.559476 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7d9e95-6291-465f-9f94-f99fc86e4389" containerName="extract-utilities" Feb 27 10:30:29 crc 
kubenswrapper[4728]: I0227 10:30:29.559751 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7d9e95-6291-465f-9f94-f99fc86e4389" containerName="extract-utilities" Feb 27 10:30:29 crc kubenswrapper[4728]: E0227 10:30:29.559892 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fed72ee-d3b1-4c16-8292-69269d9bf816" containerName="route-controller-manager" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.560022 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fed72ee-d3b1-4c16-8292-69269d9bf816" containerName="route-controller-manager" Feb 27 10:30:29 crc kubenswrapper[4728]: E0227 10:30:29.560141 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055d41f1-5e49-481e-8662-a245ba878526" containerName="extract-utilities" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.560557 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="055d41f1-5e49-481e-8662-a245ba878526" containerName="extract-utilities" Feb 27 10:30:29 crc kubenswrapper[4728]: E0227 10:30:29.573740 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49693a3e-1583-4584-9049-fe85013bb9ab" containerName="registry-server" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.573774 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="49693a3e-1583-4584-9049-fe85013bb9ab" containerName="registry-server" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.574605 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fed72ee-d3b1-4c16-8292-69269d9bf816" containerName="route-controller-manager" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.574663 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="380270a6-c1d3-49a1-b3c7-9080ae9038b9" containerName="oc" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.574685 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="055d41f1-5e49-481e-8662-a245ba878526" containerName="registry-server" Feb 27 
10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.574729 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f66047b-9fdb-4972-b8e4-59eedce02a32" containerName="controller-manager" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.574773 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7d9e95-6291-465f-9f94-f99fc86e4389" containerName="registry-server" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.574819 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="49693a3e-1583-4584-9049-fe85013bb9ab" containerName="registry-server" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.575805 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c"] Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.577926 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.579037 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.590243 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6749b8c876-bbl5m"] Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.599642 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c"] Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.662025 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8a0e313-f25f-4fbd-a611-73dfaaea0cfb-config\") pod \"controller-manager-6749b8c876-bbl5m\" (UID: \"b8a0e313-f25f-4fbd-a611-73dfaaea0cfb\") " pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.662111 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5efa4dae-b03b-4070-8431-b75d24e1aa94-client-ca\") pod \"route-controller-manager-7f7cf79d89-4zx8c\" (UID: \"5efa4dae-b03b-4070-8431-b75d24e1aa94\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.662165 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5efa4dae-b03b-4070-8431-b75d24e1aa94-serving-cert\") pod \"route-controller-manager-7f7cf79d89-4zx8c\" (UID: \"5efa4dae-b03b-4070-8431-b75d24e1aa94\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.662238 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7kt6k\" (UniqueName: \"kubernetes.io/projected/5efa4dae-b03b-4070-8431-b75d24e1aa94-kube-api-access-7kt6k\") pod \"route-controller-manager-7f7cf79d89-4zx8c\" (UID: \"5efa4dae-b03b-4070-8431-b75d24e1aa94\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.662298 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8a0e313-f25f-4fbd-a611-73dfaaea0cfb-proxy-ca-bundles\") pod \"controller-manager-6749b8c876-bbl5m\" (UID: \"b8a0e313-f25f-4fbd-a611-73dfaaea0cfb\") " pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.662346 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sm42\" (UniqueName: \"kubernetes.io/projected/b8a0e313-f25f-4fbd-a611-73dfaaea0cfb-kube-api-access-8sm42\") pod \"controller-manager-6749b8c876-bbl5m\" (UID: \"b8a0e313-f25f-4fbd-a611-73dfaaea0cfb\") " pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.662372 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5efa4dae-b03b-4070-8431-b75d24e1aa94-config\") pod \"route-controller-manager-7f7cf79d89-4zx8c\" (UID: \"5efa4dae-b03b-4070-8431-b75d24e1aa94\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.662396 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8a0e313-f25f-4fbd-a611-73dfaaea0cfb-client-ca\") pod \"controller-manager-6749b8c876-bbl5m\" (UID: 
\"b8a0e313-f25f-4fbd-a611-73dfaaea0cfb\") " pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.662463 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8a0e313-f25f-4fbd-a611-73dfaaea0cfb-serving-cert\") pod \"controller-manager-6749b8c876-bbl5m\" (UID: \"b8a0e313-f25f-4fbd-a611-73dfaaea0cfb\") " pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.763591 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kt6k\" (UniqueName: \"kubernetes.io/projected/5efa4dae-b03b-4070-8431-b75d24e1aa94-kube-api-access-7kt6k\") pod \"route-controller-manager-7f7cf79d89-4zx8c\" (UID: \"5efa4dae-b03b-4070-8431-b75d24e1aa94\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.763643 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8a0e313-f25f-4fbd-a611-73dfaaea0cfb-proxy-ca-bundles\") pod \"controller-manager-6749b8c876-bbl5m\" (UID: \"b8a0e313-f25f-4fbd-a611-73dfaaea0cfb\") " pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.763669 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sm42\" (UniqueName: \"kubernetes.io/projected/b8a0e313-f25f-4fbd-a611-73dfaaea0cfb-kube-api-access-8sm42\") pod \"controller-manager-6749b8c876-bbl5m\" (UID: \"b8a0e313-f25f-4fbd-a611-73dfaaea0cfb\") " pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.763694 4728 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5efa4dae-b03b-4070-8431-b75d24e1aa94-config\") pod \"route-controller-manager-7f7cf79d89-4zx8c\" (UID: \"5efa4dae-b03b-4070-8431-b75d24e1aa94\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.763713 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8a0e313-f25f-4fbd-a611-73dfaaea0cfb-client-ca\") pod \"controller-manager-6749b8c876-bbl5m\" (UID: \"b8a0e313-f25f-4fbd-a611-73dfaaea0cfb\") " pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.763798 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8a0e313-f25f-4fbd-a611-73dfaaea0cfb-serving-cert\") pod \"controller-manager-6749b8c876-bbl5m\" (UID: \"b8a0e313-f25f-4fbd-a611-73dfaaea0cfb\") " pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.763843 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8a0e313-f25f-4fbd-a611-73dfaaea0cfb-config\") pod \"controller-manager-6749b8c876-bbl5m\" (UID: \"b8a0e313-f25f-4fbd-a611-73dfaaea0cfb\") " pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.763903 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5efa4dae-b03b-4070-8431-b75d24e1aa94-client-ca\") pod \"route-controller-manager-7f7cf79d89-4zx8c\" (UID: \"5efa4dae-b03b-4070-8431-b75d24e1aa94\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" Feb 27 10:30:29 crc kubenswrapper[4728]: 
I0227 10:30:29.763931 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5efa4dae-b03b-4070-8431-b75d24e1aa94-serving-cert\") pod \"route-controller-manager-7f7cf79d89-4zx8c\" (UID: \"5efa4dae-b03b-4070-8431-b75d24e1aa94\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.765983 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8a0e313-f25f-4fbd-a611-73dfaaea0cfb-proxy-ca-bundles\") pod \"controller-manager-6749b8c876-bbl5m\" (UID: \"b8a0e313-f25f-4fbd-a611-73dfaaea0cfb\") " pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.766327 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8a0e313-f25f-4fbd-a611-73dfaaea0cfb-client-ca\") pod \"controller-manager-6749b8c876-bbl5m\" (UID: \"b8a0e313-f25f-4fbd-a611-73dfaaea0cfb\") " pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.766624 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5efa4dae-b03b-4070-8431-b75d24e1aa94-config\") pod \"route-controller-manager-7f7cf79d89-4zx8c\" (UID: \"5efa4dae-b03b-4070-8431-b75d24e1aa94\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.767075 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5efa4dae-b03b-4070-8431-b75d24e1aa94-client-ca\") pod \"route-controller-manager-7f7cf79d89-4zx8c\" (UID: \"5efa4dae-b03b-4070-8431-b75d24e1aa94\") " 
pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.767842 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8a0e313-f25f-4fbd-a611-73dfaaea0cfb-config\") pod \"controller-manager-6749b8c876-bbl5m\" (UID: \"b8a0e313-f25f-4fbd-a611-73dfaaea0cfb\") " pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.769041 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5efa4dae-b03b-4070-8431-b75d24e1aa94-serving-cert\") pod \"route-controller-manager-7f7cf79d89-4zx8c\" (UID: \"5efa4dae-b03b-4070-8431-b75d24e1aa94\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.773629 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8a0e313-f25f-4fbd-a611-73dfaaea0cfb-serving-cert\") pod \"controller-manager-6749b8c876-bbl5m\" (UID: \"b8a0e313-f25f-4fbd-a611-73dfaaea0cfb\") " pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.787374 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kt6k\" (UniqueName: \"kubernetes.io/projected/5efa4dae-b03b-4070-8431-b75d24e1aa94-kube-api-access-7kt6k\") pod \"route-controller-manager-7f7cf79d89-4zx8c\" (UID: \"5efa4dae-b03b-4070-8431-b75d24e1aa94\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.794538 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sm42\" (UniqueName: 
\"kubernetes.io/projected/b8a0e313-f25f-4fbd-a611-73dfaaea0cfb-kube-api-access-8sm42\") pod \"controller-manager-6749b8c876-bbl5m\" (UID: \"b8a0e313-f25f-4fbd-a611-73dfaaea0cfb\") " pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.897928 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.908718 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.942875 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" event={"ID":"7fed72ee-d3b1-4c16-8292-69269d9bf816","Type":"ContainerDied","Data":"ab82dc34a8ad8011dc939a369a5abba020b94cfafa742114253c97d773550588"} Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.943239 4728 scope.go:117] "RemoveContainer" containerID="b2b00d1c8fd5a899effbb5e0a24d63391cec6ce750c81a80aa3dc4944d50364d" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.942975 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.946203 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" event={"ID":"3f66047b-9fdb-4972-b8e4-59eedce02a32","Type":"ContainerDied","Data":"a231fb8dd9553b11cc6cde16868b6e3918f272a1c1b8330152e78090d662ab55"} Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.946800 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77647bd6bd-tptvb" Feb 27 10:30:29 crc kubenswrapper[4728]: I0227 10:30:29.991727 4728 scope.go:117] "RemoveContainer" containerID="c918ac39608ba04b6b43a4dbfb408fe6c6c22533af4e2060da343b5787e82b63" Feb 27 10:30:30 crc kubenswrapper[4728]: I0227 10:30:30.003377 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798"] Feb 27 10:30:30 crc kubenswrapper[4728]: I0227 10:30:30.006705 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bbd8c9df6-57798"] Feb 27 10:30:30 crc kubenswrapper[4728]: I0227 10:30:30.009691 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77647bd6bd-tptvb"] Feb 27 10:30:30 crc kubenswrapper[4728]: I0227 10:30:30.013411 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77647bd6bd-tptvb"] Feb 27 10:30:30 crc kubenswrapper[4728]: I0227 10:30:30.185196 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6749b8c876-bbl5m"] Feb 27 10:30:30 crc kubenswrapper[4728]: W0227 10:30:30.189842 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8a0e313_f25f_4fbd_a611_73dfaaea0cfb.slice/crio-37b954bedc9db68a6370e667c474f5a2ad298ef2b3db2ec556abc96723502aef WatchSource:0}: Error finding container 37b954bedc9db68a6370e667c474f5a2ad298ef2b3db2ec556abc96723502aef: Status 404 returned error can't find the container with id 37b954bedc9db68a6370e667c474f5a2ad298ef2b3db2ec556abc96723502aef Feb 27 10:30:30 crc kubenswrapper[4728]: I0227 10:30:30.220742 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c"] Feb 27 
10:30:30 crc kubenswrapper[4728]: W0227 10:30:30.225897 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5efa4dae_b03b_4070_8431_b75d24e1aa94.slice/crio-82e872fbed24b01299924209385d4a22ddfaac3584fa9d7a71621e59ae969bb5 WatchSource:0}: Error finding container 82e872fbed24b01299924209385d4a22ddfaac3584fa9d7a71621e59ae969bb5: Status 404 returned error can't find the container with id 82e872fbed24b01299924209385d4a22ddfaac3584fa9d7a71621e59ae969bb5 Feb 27 10:30:30 crc kubenswrapper[4728]: I0227 10:30:30.731075 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f66047b-9fdb-4972-b8e4-59eedce02a32" path="/var/lib/kubelet/pods/3f66047b-9fdb-4972-b8e4-59eedce02a32/volumes" Feb 27 10:30:30 crc kubenswrapper[4728]: I0227 10:30:30.732118 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fed72ee-d3b1-4c16-8292-69269d9bf816" path="/var/lib/kubelet/pods/7fed72ee-d3b1-4c16-8292-69269d9bf816/volumes" Feb 27 10:30:30 crc kubenswrapper[4728]: I0227 10:30:30.956847 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" event={"ID":"b8a0e313-f25f-4fbd-a611-73dfaaea0cfb","Type":"ContainerStarted","Data":"05652f1b7aa2764fca6631b9f558ab4e1179b5f1899f2a2d624969ba8b772b3f"} Feb 27 10:30:30 crc kubenswrapper[4728]: I0227 10:30:30.956908 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" event={"ID":"b8a0e313-f25f-4fbd-a611-73dfaaea0cfb","Type":"ContainerStarted","Data":"37b954bedc9db68a6370e667c474f5a2ad298ef2b3db2ec556abc96723502aef"} Feb 27 10:30:30 crc kubenswrapper[4728]: I0227 10:30:30.958383 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:30 crc kubenswrapper[4728]: I0227 10:30:30.960711 4728 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" event={"ID":"5efa4dae-b03b-4070-8431-b75d24e1aa94","Type":"ContainerStarted","Data":"4589486ea1444529abffbb012ade0673a342b48c44cb6517718e51b44ab7873a"} Feb 27 10:30:30 crc kubenswrapper[4728]: I0227 10:30:30.960761 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" event={"ID":"5efa4dae-b03b-4070-8431-b75d24e1aa94","Type":"ContainerStarted","Data":"82e872fbed24b01299924209385d4a22ddfaac3584fa9d7a71621e59ae969bb5"} Feb 27 10:30:30 crc kubenswrapper[4728]: I0227 10:30:30.961772 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" Feb 27 10:30:30 crc kubenswrapper[4728]: I0227 10:30:30.964283 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" Feb 27 10:30:30 crc kubenswrapper[4728]: I0227 10:30:30.968677 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" Feb 27 10:30:30 crc kubenswrapper[4728]: I0227 10:30:30.981590 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" podStartSLOduration=2.981567535 podStartE2EDuration="2.981567535s" podCreationTimestamp="2026-02-27 10:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:30:30.976397676 +0000 UTC m=+250.938763812" watchObservedRunningTime="2026-02-27 10:30:30.981567535 +0000 UTC m=+250.943933671" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.003318 4728 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f7cf79d89-4zx8c" podStartSLOduration=3.003296401 podStartE2EDuration="3.003296401s" podCreationTimestamp="2026-02-27 10:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:30:30.99563763 +0000 UTC m=+250.958003786" watchObservedRunningTime="2026-02-27 10:30:31.003296401 +0000 UTC m=+250.965662507" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.206548 4728 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.206868 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e" gracePeriod=15 Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.206925 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d" gracePeriod=15 Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.206979 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43" gracePeriod=15 Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.207022 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a" gracePeriod=15 Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.207004 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889" gracePeriod=15 Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.208239 4728 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 10:30:31 crc kubenswrapper[4728]: E0227 10:30:31.208532 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.208548 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 27 10:30:31 crc kubenswrapper[4728]: E0227 10:30:31.208565 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.208576 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:30:31 crc kubenswrapper[4728]: E0227 10:30:31.208589 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.208598 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 10:30:31 crc 
kubenswrapper[4728]: E0227 10:30:31.208616 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.208624 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 10:30:31 crc kubenswrapper[4728]: E0227 10:30:31.208637 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.208646 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:30:31 crc kubenswrapper[4728]: E0227 10:30:31.208657 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.208714 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:30:31 crc kubenswrapper[4728]: E0227 10:30:31.208734 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.208783 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:30:31 crc kubenswrapper[4728]: E0227 10:30:31.208801 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.208813 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Feb 27 10:30:31 crc kubenswrapper[4728]: E0227 10:30:31.208838 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.208847 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.208994 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.209006 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.209018 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.209032 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.209044 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.209063 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.209073 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 10:30:31 crc kubenswrapper[4728]: E0227 
10:30:31.209211 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.209221 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.209340 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.209622 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.212435 4728 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.214344 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.218747 4728 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.285088 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.285154 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.285185 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.285206 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.285254 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.285276 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.285292 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.285310 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.386411 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.386609 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.386927 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.386877 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.386992 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.387038 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.387062 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.387110 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.387135 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.387140 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.387163 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:31 crc 
kubenswrapper[4728]: I0227 10:30:31.387171 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.387199 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.387213 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.387251 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.387341 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:31 crc kubenswrapper[4728]: E0227 10:30:31.559133 4728 controller.go:195] "Failed to update lease" 
err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:31 crc kubenswrapper[4728]: E0227 10:30:31.559939 4728 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:31 crc kubenswrapper[4728]: E0227 10:30:31.560565 4728 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:31 crc kubenswrapper[4728]: E0227 10:30:31.561143 4728 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:31 crc kubenswrapper[4728]: E0227 10:30:31.561804 4728 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.561857 4728 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 27 10:30:31 crc kubenswrapper[4728]: E0227 10:30:31.562413 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="200ms" Feb 27 10:30:31 crc kubenswrapper[4728]: E0227 10:30:31.763893 4728 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="400ms" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.975739 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.978445 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.980217 4728 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d" exitCode=0 Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.980284 4728 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a" exitCode=0 Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.980305 4728 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43" exitCode=0 Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.980325 4728 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889" exitCode=2 Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.980405 4728 scope.go:117] "RemoveContainer" containerID="0f09311526d9d8d971440bee60c506ffb423eed73c2ac58c81934c2df487ae48" Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.983026 4728 generic.go:334] "Generic (PLEG): 
container finished" podID="e7a457cf-2c88-458c-b3a3-e53f1b717d81" containerID="29e794f4a0c5123cf62d527bf2f912ef5cea685ccdfe65acbedd5aa8455c5e34" exitCode=0 Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.983161 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e7a457cf-2c88-458c-b3a3-e53f1b717d81","Type":"ContainerDied","Data":"29e794f4a0c5123cf62d527bf2f912ef5cea685ccdfe65acbedd5aa8455c5e34"} Feb 27 10:30:31 crc kubenswrapper[4728]: I0227 10:30:31.984154 4728 status_manager.go:851] "Failed to get status for pod" podUID="e7a457cf-2c88-458c-b3a3-e53f1b717d81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:32 crc kubenswrapper[4728]: E0227 10:30:32.165068 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="800ms" Feb 27 10:30:32 crc kubenswrapper[4728]: E0227 10:30:32.966226 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="1.6s" Feb 27 10:30:32 crc kubenswrapper[4728]: I0227 10:30:32.992678 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.444363 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.445219 4728 status_manager.go:851] "Failed to get status for pod" podUID="e7a457cf-2c88-458c-b3a3-e53f1b717d81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.524192 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7a457cf-2c88-458c-b3a3-e53f1b717d81-kube-api-access\") pod \"e7a457cf-2c88-458c-b3a3-e53f1b717d81\" (UID: \"e7a457cf-2c88-458c-b3a3-e53f1b717d81\") " Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.524356 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7a457cf-2c88-458c-b3a3-e53f1b717d81-kubelet-dir\") pod \"e7a457cf-2c88-458c-b3a3-e53f1b717d81\" (UID: \"e7a457cf-2c88-458c-b3a3-e53f1b717d81\") " Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.524405 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e7a457cf-2c88-458c-b3a3-e53f1b717d81-var-lock\") pod \"e7a457cf-2c88-458c-b3a3-e53f1b717d81\" (UID: \"e7a457cf-2c88-458c-b3a3-e53f1b717d81\") " Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.524541 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a457cf-2c88-458c-b3a3-e53f1b717d81-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e7a457cf-2c88-458c-b3a3-e53f1b717d81" (UID: "e7a457cf-2c88-458c-b3a3-e53f1b717d81"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.524679 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a457cf-2c88-458c-b3a3-e53f1b717d81-var-lock" (OuterVolumeSpecName: "var-lock") pod "e7a457cf-2c88-458c-b3a3-e53f1b717d81" (UID: "e7a457cf-2c88-458c-b3a3-e53f1b717d81"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.524819 4728 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7a457cf-2c88-458c-b3a3-e53f1b717d81-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.524844 4728 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e7a457cf-2c88-458c-b3a3-e53f1b717d81-var-lock\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.533052 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a457cf-2c88-458c-b3a3-e53f1b717d81-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7a457cf-2c88-458c-b3a3-e53f1b717d81" (UID: "e7a457cf-2c88-458c-b3a3-e53f1b717d81"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.566837 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.568083 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.568570 4728 status_manager.go:851] "Failed to get status for pod" podUID="e7a457cf-2c88-458c-b3a3-e53f1b717d81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.569080 4728 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.626029 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.626148 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.626138 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.626167 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.626198 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.626225 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.626656 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7a457cf-2c88-458c-b3a3-e53f1b717d81-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.626672 4728 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.626683 4728 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:33 crc kubenswrapper[4728]: I0227 10:30:33.626691 4728 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.003464 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e7a457cf-2c88-458c-b3a3-e53f1b717d81","Type":"ContainerDied","Data":"635aea1e408cd0a3da79c218c7d1eff02227e7a130ddbf099abd83f3ca673133"} Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.003481 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.003817 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="635aea1e408cd0a3da79c218c7d1eff02227e7a130ddbf099abd83f3ca673133" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.007079 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.008101 4728 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e" exitCode=0 Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.008139 4728 scope.go:117] "RemoveContainer" containerID="f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.008418 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.033224 4728 scope.go:117] "RemoveContainer" containerID="fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.035200 4728 status_manager.go:851] "Failed to get status for pod" podUID="e7a457cf-2c88-458c-b3a3-e53f1b717d81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.035905 4728 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.045956 4728 status_manager.go:851] "Failed to get status for pod" podUID="e7a457cf-2c88-458c-b3a3-e53f1b717d81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.046525 4728 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.054034 4728 scope.go:117] "RemoveContainer" containerID="144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43" Feb 27 10:30:34 crc 
kubenswrapper[4728]: I0227 10:30:34.075248 4728 scope.go:117] "RemoveContainer" containerID="19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.093278 4728 scope.go:117] "RemoveContainer" containerID="f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.119865 4728 scope.go:117] "RemoveContainer" containerID="c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.151398 4728 scope.go:117] "RemoveContainer" containerID="f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d" Feb 27 10:30:34 crc kubenswrapper[4728]: E0227 10:30:34.151996 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\": container with ID starting with f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d not found: ID does not exist" containerID="f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.152047 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d"} err="failed to get container status \"f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\": rpc error: code = NotFound desc = could not find container \"f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d\": container with ID starting with f483187ce4b468f12ba37432364d63b1594b49400f77ab7ff5040730bc29b35d not found: ID does not exist" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.152078 4728 scope.go:117] "RemoveContainer" containerID="fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a" Feb 27 10:30:34 crc kubenswrapper[4728]: E0227 10:30:34.152598 
4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\": container with ID starting with fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a not found: ID does not exist" containerID="fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.152682 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a"} err="failed to get container status \"fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\": rpc error: code = NotFound desc = could not find container \"fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a\": container with ID starting with fd5f667231cb95d3d072f38635815d04c45ae9803c179df54367329131afd05a not found: ID does not exist" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.152798 4728 scope.go:117] "RemoveContainer" containerID="144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43" Feb 27 10:30:34 crc kubenswrapper[4728]: E0227 10:30:34.153181 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\": container with ID starting with 144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43 not found: ID does not exist" containerID="144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.153222 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43"} err="failed to get container status \"144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\": rpc error: code = 
NotFound desc = could not find container \"144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43\": container with ID starting with 144568fcb21837838d999f90cb51207fff11edb023b2d975dad1980e0817ca43 not found: ID does not exist" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.153249 4728 scope.go:117] "RemoveContainer" containerID="19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889" Feb 27 10:30:34 crc kubenswrapper[4728]: E0227 10:30:34.153529 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\": container with ID starting with 19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889 not found: ID does not exist" containerID="19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.153570 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889"} err="failed to get container status \"19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\": rpc error: code = NotFound desc = could not find container \"19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889\": container with ID starting with 19f04d61914f64b94f46aebdc8a7580de81f5ba7e15935be045d4e92bd0cd889 not found: ID does not exist" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.153594 4728 scope.go:117] "RemoveContainer" containerID="f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e" Feb 27 10:30:34 crc kubenswrapper[4728]: E0227 10:30:34.153932 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\": container with ID starting with 
f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e not found: ID does not exist" containerID="f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.153971 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e"} err="failed to get container status \"f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\": rpc error: code = NotFound desc = could not find container \"f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e\": container with ID starting with f4e8835f3ff41008225e090a7c460900d18997805869ec9191f1e7cce3c5778e not found: ID does not exist" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.153996 4728 scope.go:117] "RemoveContainer" containerID="c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75" Feb 27 10:30:34 crc kubenswrapper[4728]: E0227 10:30:34.154328 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\": container with ID starting with c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75 not found: ID does not exist" containerID="c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.154359 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75"} err="failed to get container status \"c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\": rpc error: code = NotFound desc = could not find container \"c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75\": container with ID starting with c860355459019bda5fddc0a658236e97a674f965aa2eff648e97bbaa30161b75 not found: ID does not 
exist" Feb 27 10:30:34 crc kubenswrapper[4728]: E0227 10:30:34.567475 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="3.2s" Feb 27 10:30:34 crc kubenswrapper[4728]: I0227 10:30:34.730846 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 27 10:30:35 crc kubenswrapper[4728]: I0227 10:30:35.922333 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:30:35 crc kubenswrapper[4728]: I0227 10:30:35.922768 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:30:35 crc kubenswrapper[4728]: E0227 10:30:35.923404 4728 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event=< Feb 27 10:30:35 crc kubenswrapper[4728]: &Event{ObjectMeta:{machine-config-daemon-mf2hh.189813cfa54d0deb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-mf2hh,UID:c2cfd349-f825-497b-b698-7fb6bc258b22,APIVersion:v1,ResourceVersion:26763,FieldPath:spec.containers{machine-config-daemon},},Reason:ProbeError,Message:Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused Feb 27 10:30:35 crc kubenswrapper[4728]: body: Feb 27 10:30:35 crc kubenswrapper[4728]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:30:35.922738667 +0000 UTC m=+255.885104813,LastTimestamp:2026-02-27 10:30:35.922738667 +0000 UTC m=+255.885104813,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 10:30:35 crc kubenswrapper[4728]: > Feb 27 10:30:36 crc kubenswrapper[4728]: E0227 10:30:36.249560 4728 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:36 crc kubenswrapper[4728]: I0227 10:30:36.249985 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:36 crc kubenswrapper[4728]: W0227 10:30:36.272964 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-4e4cb5e310d28bc0ebfed6780f26a920af3f6f674416126d13f156d0411a01ee WatchSource:0}: Error finding container 4e4cb5e310d28bc0ebfed6780f26a920af3f6f674416126d13f156d0411a01ee: Status 404 returned error can't find the container with id 4e4cb5e310d28bc0ebfed6780f26a920af3f6f674416126d13f156d0411a01ee Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.030768 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f44bbbbf88244223d404524160d5f84258c555bedaec4c3f6f047aaaa67a099d"} Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.031164 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4e4cb5e310d28bc0ebfed6780f26a920af3f6f674416126d13f156d0411a01ee"} Feb 27 10:30:37 crc kubenswrapper[4728]: E0227 10:30:37.032039 4728 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.032052 4728 status_manager.go:851] "Failed to get status for pod" podUID="e7a457cf-2c88-458c-b3a3-e53f1b717d81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: 
connection refused" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.141411 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" podUID="de6bdf95-032d-42a2-a8b5-0202641a05c1" containerName="oauth-openshift" containerID="cri-o://663fac5063b4ed346937f0493e50ea2f79981be85156caa72e27186d7e102b4b" gracePeriod=15 Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.623739 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.624906 4728 status_manager.go:851] "Failed to get status for pod" podUID="de6bdf95-032d-42a2-a8b5-0202641a05c1" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-25vw6\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.625389 4728 status_manager.go:851] "Failed to get status for pod" podUID="e7a457cf-2c88-458c-b3a3-e53f1b717d81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.679114 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-serving-cert\") pod \"de6bdf95-032d-42a2-a8b5-0202641a05c1\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.679165 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-session\") pod \"de6bdf95-032d-42a2-a8b5-0202641a05c1\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.679204 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-ocp-branding-template\") pod \"de6bdf95-032d-42a2-a8b5-0202641a05c1\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.679235 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-template-login\") pod \"de6bdf95-032d-42a2-a8b5-0202641a05c1\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.679292 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzfpf\" (UniqueName: \"kubernetes.io/projected/de6bdf95-032d-42a2-a8b5-0202641a05c1-kube-api-access-fzfpf\") pod \"de6bdf95-032d-42a2-a8b5-0202641a05c1\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.679342 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-cliconfig\") pod \"de6bdf95-032d-42a2-a8b5-0202641a05c1\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.679371 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-template-provider-selection\") pod \"de6bdf95-032d-42a2-a8b5-0202641a05c1\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.679399 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-trusted-ca-bundle\") pod \"de6bdf95-032d-42a2-a8b5-0202641a05c1\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.679420 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-audit-policies\") pod \"de6bdf95-032d-42a2-a8b5-0202641a05c1\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.679445 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de6bdf95-032d-42a2-a8b5-0202641a05c1-audit-dir\") pod \"de6bdf95-032d-42a2-a8b5-0202641a05c1\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.679468 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-service-ca\") pod \"de6bdf95-032d-42a2-a8b5-0202641a05c1\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.679496 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-idp-0-file-data\") pod 
\"de6bdf95-032d-42a2-a8b5-0202641a05c1\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.679537 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-template-error\") pod \"de6bdf95-032d-42a2-a8b5-0202641a05c1\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.679569 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-router-certs\") pod \"de6bdf95-032d-42a2-a8b5-0202641a05c1\" (UID: \"de6bdf95-032d-42a2-a8b5-0202641a05c1\") " Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.679648 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de6bdf95-032d-42a2-a8b5-0202641a05c1-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "de6bdf95-032d-42a2-a8b5-0202641a05c1" (UID: "de6bdf95-032d-42a2-a8b5-0202641a05c1"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.680002 4728 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de6bdf95-032d-42a2-a8b5-0202641a05c1-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.681670 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "de6bdf95-032d-42a2-a8b5-0202641a05c1" (UID: "de6bdf95-032d-42a2-a8b5-0202641a05c1"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.681837 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "de6bdf95-032d-42a2-a8b5-0202641a05c1" (UID: "de6bdf95-032d-42a2-a8b5-0202641a05c1"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.681863 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "de6bdf95-032d-42a2-a8b5-0202641a05c1" (UID: "de6bdf95-032d-42a2-a8b5-0202641a05c1"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.682590 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "de6bdf95-032d-42a2-a8b5-0202641a05c1" (UID: "de6bdf95-032d-42a2-a8b5-0202641a05c1"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.685398 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "de6bdf95-032d-42a2-a8b5-0202641a05c1" (UID: "de6bdf95-032d-42a2-a8b5-0202641a05c1"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.686726 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "de6bdf95-032d-42a2-a8b5-0202641a05c1" (UID: "de6bdf95-032d-42a2-a8b5-0202641a05c1"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.687377 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "de6bdf95-032d-42a2-a8b5-0202641a05c1" (UID: "de6bdf95-032d-42a2-a8b5-0202641a05c1"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.687913 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "de6bdf95-032d-42a2-a8b5-0202641a05c1" (UID: "de6bdf95-032d-42a2-a8b5-0202641a05c1"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.687997 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de6bdf95-032d-42a2-a8b5-0202641a05c1-kube-api-access-fzfpf" (OuterVolumeSpecName: "kube-api-access-fzfpf") pod "de6bdf95-032d-42a2-a8b5-0202641a05c1" (UID: "de6bdf95-032d-42a2-a8b5-0202641a05c1"). InnerVolumeSpecName "kube-api-access-fzfpf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.689055 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "de6bdf95-032d-42a2-a8b5-0202641a05c1" (UID: "de6bdf95-032d-42a2-a8b5-0202641a05c1"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.691709 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "de6bdf95-032d-42a2-a8b5-0202641a05c1" (UID: "de6bdf95-032d-42a2-a8b5-0202641a05c1"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.692055 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "de6bdf95-032d-42a2-a8b5-0202641a05c1" (UID: "de6bdf95-032d-42a2-a8b5-0202641a05c1"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.692671 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "de6bdf95-032d-42a2-a8b5-0202641a05c1" (UID: "de6bdf95-032d-42a2-a8b5-0202641a05c1"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:30:37 crc kubenswrapper[4728]: E0227 10:30:37.777633 4728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="6.4s" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.781535 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.782019 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzfpf\" (UniqueName: \"kubernetes.io/projected/de6bdf95-032d-42a2-a8b5-0202641a05c1-kube-api-access-fzfpf\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.782106 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.782128 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.782148 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:37 crc 
kubenswrapper[4728]: I0227 10:30:37.782220 4728 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.782283 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.782322 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.782337 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.782352 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.782365 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.782377 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-session\") 
on node \"crc\" DevicePath \"\"" Feb 27 10:30:37 crc kubenswrapper[4728]: I0227 10:30:37.782393 4728 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/de6bdf95-032d-42a2-a8b5-0202641a05c1-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 27 10:30:38 crc kubenswrapper[4728]: I0227 10:30:38.040181 4728 generic.go:334] "Generic (PLEG): container finished" podID="de6bdf95-032d-42a2-a8b5-0202641a05c1" containerID="663fac5063b4ed346937f0493e50ea2f79981be85156caa72e27186d7e102b4b" exitCode=0 Feb 27 10:30:38 crc kubenswrapper[4728]: I0227 10:30:38.040231 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" event={"ID":"de6bdf95-032d-42a2-a8b5-0202641a05c1","Type":"ContainerDied","Data":"663fac5063b4ed346937f0493e50ea2f79981be85156caa72e27186d7e102b4b"} Feb 27 10:30:38 crc kubenswrapper[4728]: I0227 10:30:38.040260 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" Feb 27 10:30:38 crc kubenswrapper[4728]: I0227 10:30:38.040277 4728 scope.go:117] "RemoveContainer" containerID="663fac5063b4ed346937f0493e50ea2f79981be85156caa72e27186d7e102b4b" Feb 27 10:30:38 crc kubenswrapper[4728]: I0227 10:30:38.040265 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" event={"ID":"de6bdf95-032d-42a2-a8b5-0202641a05c1","Type":"ContainerDied","Data":"0282cb8eb7faa5bd65f9280e85830dca357989a93dd75e817f870d1e5713a634"} Feb 27 10:30:38 crc kubenswrapper[4728]: I0227 10:30:38.040988 4728 status_manager.go:851] "Failed to get status for pod" podUID="de6bdf95-032d-42a2-a8b5-0202641a05c1" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-25vw6\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:38 crc kubenswrapper[4728]: I0227 10:30:38.041626 4728 status_manager.go:851] "Failed to get status for pod" podUID="e7a457cf-2c88-458c-b3a3-e53f1b717d81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:38 crc kubenswrapper[4728]: I0227 10:30:38.053665 4728 status_manager.go:851] "Failed to get status for pod" podUID="de6bdf95-032d-42a2-a8b5-0202641a05c1" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-25vw6\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:38 crc kubenswrapper[4728]: I0227 10:30:38.054039 4728 status_manager.go:851] "Failed to get status for pod" podUID="e7a457cf-2c88-458c-b3a3-e53f1b717d81" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:38 crc kubenswrapper[4728]: I0227 10:30:38.059688 4728 scope.go:117] "RemoveContainer" containerID="663fac5063b4ed346937f0493e50ea2f79981be85156caa72e27186d7e102b4b" Feb 27 10:30:38 crc kubenswrapper[4728]: E0227 10:30:38.060186 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"663fac5063b4ed346937f0493e50ea2f79981be85156caa72e27186d7e102b4b\": container with ID starting with 663fac5063b4ed346937f0493e50ea2f79981be85156caa72e27186d7e102b4b not found: ID does not exist" containerID="663fac5063b4ed346937f0493e50ea2f79981be85156caa72e27186d7e102b4b" Feb 27 10:30:38 crc kubenswrapper[4728]: I0227 10:30:38.060228 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"663fac5063b4ed346937f0493e50ea2f79981be85156caa72e27186d7e102b4b"} err="failed to get container status \"663fac5063b4ed346937f0493e50ea2f79981be85156caa72e27186d7e102b4b\": rpc error: code = NotFound desc = could not find container \"663fac5063b4ed346937f0493e50ea2f79981be85156caa72e27186d7e102b4b\": container with ID starting with 663fac5063b4ed346937f0493e50ea2f79981be85156caa72e27186d7e102b4b not found: ID does not exist" Feb 27 10:30:40 crc kubenswrapper[4728]: I0227 10:30:40.727388 4728 status_manager.go:851] "Failed to get status for pod" podUID="de6bdf95-032d-42a2-a8b5-0202641a05c1" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-25vw6\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:40 crc kubenswrapper[4728]: I0227 10:30:40.728342 4728 status_manager.go:851] "Failed to get status for pod" 
podUID="e7a457cf-2c88-458c-b3a3-e53f1b717d81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:40 crc kubenswrapper[4728]: E0227 10:30:40.825197 4728 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event=< Feb 27 10:30:40 crc kubenswrapper[4728]: &Event{ObjectMeta:{machine-config-daemon-mf2hh.189813cfa54d0deb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-mf2hh,UID:c2cfd349-f825-497b-b698-7fb6bc258b22,APIVersion:v1,ResourceVersion:26763,FieldPath:spec.containers{machine-config-daemon},},Reason:ProbeError,Message:Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused Feb 27 10:30:40 crc kubenswrapper[4728]: body: Feb 27 10:30:40 crc kubenswrapper[4728]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 10:30:35.922738667 +0000 UTC m=+255.885104813,LastTimestamp:2026-02-27 10:30:35.922738667 +0000 UTC m=+255.885104813,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 10:30:40 crc kubenswrapper[4728]: > Feb 27 10:30:41 crc kubenswrapper[4728]: I0227 10:30:41.724539 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:41 crc kubenswrapper[4728]: I0227 10:30:41.725137 4728 status_manager.go:851] "Failed to get status for pod" podUID="e7a457cf-2c88-458c-b3a3-e53f1b717d81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:41 crc kubenswrapper[4728]: I0227 10:30:41.725470 4728 status_manager.go:851] "Failed to get status for pod" podUID="de6bdf95-032d-42a2-a8b5-0202641a05c1" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-25vw6\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:41 crc kubenswrapper[4728]: I0227 10:30:41.742263 4728 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ded44d8-d959-4509-be28-3560f21eebda" Feb 27 10:30:41 crc kubenswrapper[4728]: I0227 10:30:41.742296 4728 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ded44d8-d959-4509-be28-3560f21eebda" Feb 27 10:30:41 crc kubenswrapper[4728]: E0227 10:30:41.742973 4728 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:41 crc kubenswrapper[4728]: I0227 10:30:41.743708 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:41 crc kubenswrapper[4728]: W0227 10:30:41.769748 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-58b4a84a0dc646b690d9d2b19b72562fbb59f09c84aef8aa207bff7908708acb WatchSource:0}: Error finding container 58b4a84a0dc646b690d9d2b19b72562fbb59f09c84aef8aa207bff7908708acb: Status 404 returned error can't find the container with id 58b4a84a0dc646b690d9d2b19b72562fbb59f09c84aef8aa207bff7908708acb Feb 27 10:30:42 crc kubenswrapper[4728]: I0227 10:30:42.077459 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7cf9b14cbbab7deec4acd934c92e63b400cc7d30941a9daea90e1b4af2aec6d7"} Feb 27 10:30:42 crc kubenswrapper[4728]: I0227 10:30:42.077797 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"58b4a84a0dc646b690d9d2b19b72562fbb59f09c84aef8aa207bff7908708acb"} Feb 27 10:30:42 crc kubenswrapper[4728]: I0227 10:30:42.078094 4728 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ded44d8-d959-4509-be28-3560f21eebda" Feb 27 10:30:42 crc kubenswrapper[4728]: I0227 10:30:42.078121 4728 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ded44d8-d959-4509-be28-3560f21eebda" Feb 27 10:30:42 crc kubenswrapper[4728]: I0227 10:30:42.078380 4728 status_manager.go:851] "Failed to get status for pod" podUID="de6bdf95-032d-42a2-a8b5-0202641a05c1" pod="openshift-authentication/oauth-openshift-558db77b4-25vw6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-25vw6\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:42 crc kubenswrapper[4728]: E0227 10:30:42.078518 4728 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:42 crc kubenswrapper[4728]: I0227 10:30:42.078632 4728 status_manager.go:851] "Failed to get status for pod" podUID="e7a457cf-2c88-458c-b3a3-e53f1b717d81" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Feb 27 10:30:43 crc kubenswrapper[4728]: I0227 10:30:43.084947 4728 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="7cf9b14cbbab7deec4acd934c92e63b400cc7d30941a9daea90e1b4af2aec6d7" exitCode=0 Feb 27 10:30:43 crc kubenswrapper[4728]: I0227 10:30:43.084976 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"7cf9b14cbbab7deec4acd934c92e63b400cc7d30941a9daea90e1b4af2aec6d7"} Feb 27 10:30:43 crc kubenswrapper[4728]: I0227 10:30:43.085248 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"53a8d14d01000282a4f9f341dd64e443aab8d77f033912451eeb125af03c28c7"} Feb 27 10:30:43 crc kubenswrapper[4728]: I0227 10:30:43.085259 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"10edf222292680088bd5e0fd5d9841cac0bdebe95f81600c6bc504d9d62919a5"} Feb 27 10:30:43 crc kubenswrapper[4728]: I0227 10:30:43.085268 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b935ebbf6f084878539a25fe06430ebd7287df5a9d11839dae85d7fae829d759"} Feb 27 10:30:44 crc kubenswrapper[4728]: I0227 10:30:44.094054 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"29f0576a19326b6f060c9212f23b2e0f78331774ed9db033f47c4706569283c1"} Feb 27 10:30:44 crc kubenswrapper[4728]: I0227 10:30:44.094095 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"70311e8a78be4558479a28824bab966c5ca8af764c646701b36fb930df31b087"} Feb 27 10:30:44 crc kubenswrapper[4728]: I0227 10:30:44.094217 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:44 crc kubenswrapper[4728]: I0227 10:30:44.094301 4728 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ded44d8-d959-4509-be28-3560f21eebda" Feb 27 10:30:44 crc kubenswrapper[4728]: I0227 10:30:44.094325 4728 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ded44d8-d959-4509-be28-3560f21eebda" Feb 27 10:30:45 crc kubenswrapper[4728]: I0227 10:30:45.101383 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 27 10:30:45 crc kubenswrapper[4728]: I0227 
10:30:45.102693 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 27 10:30:45 crc kubenswrapper[4728]: I0227 10:30:45.102746 4728 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2a5ab953af522332825eec5ebd360d933ca20bf07db8f0bc94fcb5702fdce3d6" exitCode=1 Feb 27 10:30:45 crc kubenswrapper[4728]: I0227 10:30:45.102779 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2a5ab953af522332825eec5ebd360d933ca20bf07db8f0bc94fcb5702fdce3d6"} Feb 27 10:30:45 crc kubenswrapper[4728]: I0227 10:30:45.103191 4728 scope.go:117] "RemoveContainer" containerID="2a5ab953af522332825eec5ebd360d933ca20bf07db8f0bc94fcb5702fdce3d6" Feb 27 10:30:46 crc kubenswrapper[4728]: I0227 10:30:46.113794 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 27 10:30:46 crc kubenswrapper[4728]: I0227 10:30:46.115862 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 27 10:30:46 crc kubenswrapper[4728]: I0227 10:30:46.115961 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"47a1d2c288afebaf82668dd928999dc795c60c038dd884a81f5e27eb4d3759ef"} Feb 27 10:30:46 crc kubenswrapper[4728]: I0227 10:30:46.744433 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 
10:30:46 crc kubenswrapper[4728]: I0227 10:30:46.744657 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:46 crc kubenswrapper[4728]: I0227 10:30:46.753923 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:47 crc kubenswrapper[4728]: I0227 10:30:47.461175 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:30:47 crc kubenswrapper[4728]: I0227 10:30:47.468153 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:30:47 crc kubenswrapper[4728]: I0227 10:30:47.637958 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:30:49 crc kubenswrapper[4728]: I0227 10:30:49.119299 4728 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:50 crc kubenswrapper[4728]: I0227 10:30:50.146182 4728 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ded44d8-d959-4509-be28-3560f21eebda" Feb 27 10:30:50 crc kubenswrapper[4728]: I0227 10:30:50.147376 4728 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ded44d8-d959-4509-be28-3560f21eebda" Feb 27 10:30:50 crc kubenswrapper[4728]: I0227 10:30:50.151641 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:30:50 crc kubenswrapper[4728]: I0227 10:30:50.745061 4728 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" 
oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="20aee2ad-e9af-4d46-b28e-dd13fb7097f1" Feb 27 10:30:51 crc kubenswrapper[4728]: I0227 10:30:51.153051 4728 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ded44d8-d959-4509-be28-3560f21eebda" Feb 27 10:30:51 crc kubenswrapper[4728]: I0227 10:30:51.153092 4728 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ded44d8-d959-4509-be28-3560f21eebda" Feb 27 10:30:51 crc kubenswrapper[4728]: I0227 10:30:51.157961 4728 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="20aee2ad-e9af-4d46-b28e-dd13fb7097f1" Feb 27 10:30:55 crc kubenswrapper[4728]: I0227 10:30:55.549217 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 27 10:30:55 crc kubenswrapper[4728]: I0227 10:30:55.618393 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 27 10:30:55 crc kubenswrapper[4728]: I0227 10:30:55.642399 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 27 10:30:55 crc kubenswrapper[4728]: I0227 10:30:55.706848 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 10:30:55 crc kubenswrapper[4728]: I0227 10:30:55.813080 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 27 10:30:55 crc kubenswrapper[4728]: I0227 10:30:55.981788 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 27 10:30:56 crc kubenswrapper[4728]: I0227 10:30:56.054552 4728 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 27 10:30:56 crc kubenswrapper[4728]: I0227 10:30:56.235917 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 27 10:30:56 crc kubenswrapper[4728]: I0227 10:30:56.349589 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 27 10:30:56 crc kubenswrapper[4728]: I0227 10:30:56.455603 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 27 10:30:56 crc kubenswrapper[4728]: I0227 10:30:56.725954 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 27 10:30:56 crc kubenswrapper[4728]: I0227 10:30:56.808608 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 27 10:30:57 crc kubenswrapper[4728]: I0227 10:30:57.200710 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 27 10:30:57 crc kubenswrapper[4728]: I0227 10:30:57.269877 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 27 10:30:57 crc kubenswrapper[4728]: I0227 10:30:57.301189 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 10:30:57 crc kubenswrapper[4728]: I0227 10:30:57.642345 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 10:30:57 crc kubenswrapper[4728]: I0227 10:30:57.798113 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 27 10:30:57 crc kubenswrapper[4728]: 
I0227 10:30:57.970780 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 27 10:30:57 crc kubenswrapper[4728]: I0227 10:30:57.990921 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 27 10:30:58 crc kubenswrapper[4728]: I0227 10:30:58.039146 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 27 10:30:58 crc kubenswrapper[4728]: I0227 10:30:58.073790 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 27 10:30:58 crc kubenswrapper[4728]: I0227 10:30:58.249927 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 27 10:30:58 crc kubenswrapper[4728]: I0227 10:30:58.274316 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 27 10:30:58 crc kubenswrapper[4728]: I0227 10:30:58.409614 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 27 10:30:58 crc kubenswrapper[4728]: I0227 10:30:58.504238 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 27 10:30:58 crc kubenswrapper[4728]: I0227 10:30:58.520079 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 27 10:30:58 crc kubenswrapper[4728]: I0227 10:30:58.532119 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 27 10:30:58 crc kubenswrapper[4728]: I0227 10:30:58.613656 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" 
Feb 27 10:30:58 crc kubenswrapper[4728]: I0227 10:30:58.666814 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 27 10:30:58 crc kubenswrapper[4728]: I0227 10:30:58.691210 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 27 10:30:58 crc kubenswrapper[4728]: I0227 10:30:58.697630 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 10:30:58 crc kubenswrapper[4728]: I0227 10:30:58.881184 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 27 10:30:58 crc kubenswrapper[4728]: I0227 10:30:58.966272 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 27 10:30:59 crc kubenswrapper[4728]: I0227 10:30:59.008374 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 27 10:30:59 crc kubenswrapper[4728]: I0227 10:30:59.054584 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 27 10:30:59 crc kubenswrapper[4728]: I0227 10:30:59.067666 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 27 10:30:59 crc kubenswrapper[4728]: I0227 10:30:59.166832 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 27 10:30:59 crc kubenswrapper[4728]: I0227 10:30:59.211932 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 27 10:30:59 crc kubenswrapper[4728]: I0227 10:30:59.241335 4728 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 27 10:30:59 crc kubenswrapper[4728]: I0227 10:30:59.509079 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 10:30:59 crc kubenswrapper[4728]: I0227 10:30:59.545425 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 27 10:30:59 crc kubenswrapper[4728]: I0227 10:30:59.590202 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 27 10:30:59 crc kubenswrapper[4728]: I0227 10:30:59.902063 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 27 10:30:59 crc kubenswrapper[4728]: I0227 10:30:59.919020 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 27 10:31:00 crc kubenswrapper[4728]: I0227 10:31:00.211065 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 27 10:31:00 crc kubenswrapper[4728]: I0227 10:31:00.285840 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 27 10:31:00 crc kubenswrapper[4728]: I0227 10:31:00.303248 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 27 10:31:00 crc kubenswrapper[4728]: I0227 10:31:00.586413 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 27 10:31:00 crc kubenswrapper[4728]: I0227 10:31:00.757202 4728 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.089483 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.116027 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.142891 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.227413 4728 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.234581 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-25vw6","openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.234675 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.235202 4728 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ded44d8-d959-4509-be28-3560f21eebda" Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.235249 4728 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9ded44d8-d959-4509-be28-3560f21eebda" Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.243761 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.259929 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 27 10:31:01 crc 
kubenswrapper[4728]: I0227 10:31:01.263862 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=12.263831858 podStartE2EDuration="12.263831858s" podCreationTimestamp="2026-02-27 10:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:31:01.25968854 +0000 UTC m=+281.222054686" watchObservedRunningTime="2026-02-27 10:31:01.263831858 +0000 UTC m=+281.226198004" Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.543120 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.553551 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.558025 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.589532 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.615369 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.653814 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.670889 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.712616 4728 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"serving-cert" Feb 27 10:31:01 crc kubenswrapper[4728]: I0227 10:31:01.734734 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 27 10:31:02 crc kubenswrapper[4728]: I0227 10:31:02.167481 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 27 10:31:02 crc kubenswrapper[4728]: I0227 10:31:02.709262 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 27 10:31:02 crc kubenswrapper[4728]: I0227 10:31:02.736995 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de6bdf95-032d-42a2-a8b5-0202641a05c1" path="/var/lib/kubelet/pods/de6bdf95-032d-42a2-a8b5-0202641a05c1/volumes" Feb 27 10:31:03 crc kubenswrapper[4728]: I0227 10:31:03.145241 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 27 10:31:03 crc kubenswrapper[4728]: I0227 10:31:03.168684 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 27 10:31:03 crc kubenswrapper[4728]: I0227 10:31:03.627352 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 27 10:31:03 crc kubenswrapper[4728]: I0227 10:31:03.863943 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 27 10:31:04 crc kubenswrapper[4728]: I0227 10:31:04.008425 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 27 10:31:04 crc kubenswrapper[4728]: I0227 10:31:04.014125 4728 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 27 10:31:04 crc kubenswrapper[4728]: I0227 10:31:04.156545 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 27 10:31:04 crc kubenswrapper[4728]: I0227 10:31:04.177219 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 27 10:31:04 crc kubenswrapper[4728]: I0227 10:31:04.180180 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 27 10:31:04 crc kubenswrapper[4728]: I0227 10:31:04.344406 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 27 10:31:04 crc kubenswrapper[4728]: I0227 10:31:04.675096 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 27 10:31:04 crc kubenswrapper[4728]: I0227 10:31:04.692492 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 27 10:31:04 crc kubenswrapper[4728]: I0227 10:31:04.730727 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 27 10:31:04 crc kubenswrapper[4728]: I0227 10:31:04.750196 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 10:31:04 crc kubenswrapper[4728]: I0227 10:31:04.823136 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 27 10:31:04 crc kubenswrapper[4728]: I0227 10:31:04.857330 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 27 10:31:05 crc kubenswrapper[4728]: I0227 10:31:05.132053 4728 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 10:31:05 crc kubenswrapper[4728]: I0227 10:31:05.526557 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 27 10:31:05 crc kubenswrapper[4728]: I0227 10:31:05.555640 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 27 10:31:05 crc kubenswrapper[4728]: I0227 10:31:05.652124 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 27 10:31:05 crc kubenswrapper[4728]: I0227 10:31:05.727080 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 27 10:31:05 crc kubenswrapper[4728]: I0227 10:31:05.888478 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 27 10:31:05 crc kubenswrapper[4728]: I0227 10:31:05.922094 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:31:05 crc kubenswrapper[4728]: I0227 10:31:05.922575 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.031113 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 27 
10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.060706 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.378187 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.490173 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.509162 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.646322 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-55c8c74798-tbnmx"] Feb 27 10:31:06 crc kubenswrapper[4728]: E0227 10:31:06.646573 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a457cf-2c88-458c-b3a3-e53f1b717d81" containerName="installer" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.646588 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a457cf-2c88-458c-b3a3-e53f1b717d81" containerName="installer" Feb 27 10:31:06 crc kubenswrapper[4728]: E0227 10:31:06.646597 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de6bdf95-032d-42a2-a8b5-0202641a05c1" containerName="oauth-openshift" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.646604 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="de6bdf95-032d-42a2-a8b5-0202641a05c1" containerName="oauth-openshift" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.646718 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="de6bdf95-032d-42a2-a8b5-0202641a05c1" containerName="oauth-openshift" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.646732 4728 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e7a457cf-2c88-458c-b3a3-e53f1b717d81" containerName="installer" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.647161 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.657820 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.657893 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.657824 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.657917 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.657907 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.658431 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.658440 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.658659 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.658861 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 27 10:31:06 crc 
kubenswrapper[4728]: I0227 10:31:06.658939 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.659727 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.659856 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.659968 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.666049 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.668176 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.676639 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55c8c74798-tbnmx"] Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.693446 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.745431 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.750620 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.751077 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-service-ca\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.751146 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.751174 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-router-certs\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.751197 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fed02bf-7183-45bc-84f1-4191d7fd7be2-audit-dir\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.751243 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-user-template-login\") pod 
\"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.751271 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0fed02bf-7183-45bc-84f1-4191d7fd7be2-audit-policies\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.751317 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.751350 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.751377 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j92ks\" (UniqueName: \"kubernetes.io/projected/0fed02bf-7183-45bc-84f1-4191d7fd7be2-kube-api-access-j92ks\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 
crc kubenswrapper[4728]: I0227 10:31:06.751469 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.751557 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.751632 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-user-template-error\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.751723 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-session\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.751795 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.778370 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.852810 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.852869 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-service-ca\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.852918 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.852952 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-router-certs\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.852976 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fed02bf-7183-45bc-84f1-4191d7fd7be2-audit-dir\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.853003 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-user-template-login\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.853034 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0fed02bf-7183-45bc-84f1-4191d7fd7be2-audit-policies\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.853059 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " 
pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.853094 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.853114 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j92ks\" (UniqueName: \"kubernetes.io/projected/0fed02bf-7183-45bc-84f1-4191d7fd7be2-kube-api-access-j92ks\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.853142 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.853165 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.853199 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-user-template-error\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.853201 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fed02bf-7183-45bc-84f1-4191d7fd7be2-audit-dir\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.853233 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-session\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.854738 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.857518 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0fed02bf-7183-45bc-84f1-4191d7fd7be2-audit-policies\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc 
kubenswrapper[4728]: I0227 10:31:06.858021 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-service-ca\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.858703 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.864103 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-user-template-error\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.864750 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.865856 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.865899 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.866706 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-router-certs\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.872003 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-session\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.872110 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j92ks\" (UniqueName: \"kubernetes.io/projected/0fed02bf-7183-45bc-84f1-4191d7fd7be2-kube-api-access-j92ks\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 
10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.872272 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.872423 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0fed02bf-7183-45bc-84f1-4191d7fd7be2-v4-0-config-user-template-login\") pod \"oauth-openshift-55c8c74798-tbnmx\" (UID: \"0fed02bf-7183-45bc-84f1-4191d7fd7be2\") " pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.974805 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:06 crc kubenswrapper[4728]: I0227 10:31:06.995401 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 27 10:31:07 crc kubenswrapper[4728]: I0227 10:31:07.013955 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 27 10:31:07 crc kubenswrapper[4728]: I0227 10:31:07.159148 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 10:31:07 crc kubenswrapper[4728]: I0227 10:31:07.189122 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 27 10:31:07 crc kubenswrapper[4728]: I0227 10:31:07.203960 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 27 10:31:07 crc 
kubenswrapper[4728]: I0227 10:31:07.231340 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 27 10:31:07 crc kubenswrapper[4728]: I0227 10:31:07.235847 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 27 10:31:07 crc kubenswrapper[4728]: I0227 10:31:07.273448 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 27 10:31:07 crc kubenswrapper[4728]: I0227 10:31:07.304673 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 27 10:31:07 crc kubenswrapper[4728]: I0227 10:31:07.378193 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 27 10:31:07 crc kubenswrapper[4728]: I0227 10:31:07.507245 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 27 10:31:07 crc kubenswrapper[4728]: I0227 10:31:07.570182 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 27 10:31:07 crc kubenswrapper[4728]: I0227 10:31:07.571839 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 27 10:31:07 crc kubenswrapper[4728]: I0227 10:31:07.715133 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 27 10:31:07 crc kubenswrapper[4728]: I0227 10:31:07.721098 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 27 10:31:07 crc kubenswrapper[4728]: I0227 10:31:07.790630 4728 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 27 10:31:07 crc kubenswrapper[4728]: I0227 10:31:07.795249 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 27 10:31:07 crc kubenswrapper[4728]: I0227 10:31:07.831120 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 27 10:31:07 crc kubenswrapper[4728]: I0227 10:31:07.885358 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 27 10:31:07 crc kubenswrapper[4728]: I0227 10:31:07.896117 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 27 10:31:08 crc kubenswrapper[4728]: I0227 10:31:08.029619 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 27 10:31:08 crc kubenswrapper[4728]: I0227 10:31:08.063666 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 27 10:31:08 crc kubenswrapper[4728]: I0227 10:31:08.101632 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 27 10:31:08 crc kubenswrapper[4728]: I0227 10:31:08.161214 4728 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 27 10:31:08 crc kubenswrapper[4728]: I0227 10:31:08.256411 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 27 10:31:08 crc kubenswrapper[4728]: I0227 10:31:08.451365 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 27 10:31:08 crc 
kubenswrapper[4728]: I0227 10:31:08.453454 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 27 10:31:08 crc kubenswrapper[4728]: I0227 10:31:08.630267 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 27 10:31:08 crc kubenswrapper[4728]: I0227 10:31:08.691982 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 27 10:31:08 crc kubenswrapper[4728]: I0227 10:31:08.715892 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 27 10:31:08 crc kubenswrapper[4728]: I0227 10:31:08.764038 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 27 10:31:08 crc kubenswrapper[4728]: I0227 10:31:08.907570 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 27 10:31:08 crc kubenswrapper[4728]: I0227 10:31:08.918162 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 27 10:31:08 crc kubenswrapper[4728]: I0227 10:31:08.957913 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 27 10:31:08 crc kubenswrapper[4728]: I0227 10:31:08.958091 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 27 10:31:09 crc kubenswrapper[4728]: I0227 10:31:09.060779 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 27 10:31:09 crc kubenswrapper[4728]: I0227 10:31:09.088094 4728 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 27 10:31:09 crc kubenswrapper[4728]: I0227 10:31:09.152429 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 27 10:31:09 crc kubenswrapper[4728]: I0227 10:31:09.200243 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 27 10:31:09 crc kubenswrapper[4728]: I0227 10:31:09.302228 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 27 10:31:09 crc kubenswrapper[4728]: I0227 10:31:09.312579 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 27 10:31:09 crc kubenswrapper[4728]: I0227 10:31:09.314697 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 27 10:31:09 crc kubenswrapper[4728]: I0227 10:31:09.342918 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 27 10:31:09 crc kubenswrapper[4728]: I0227 10:31:09.606391 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 27 10:31:09 crc kubenswrapper[4728]: I0227 10:31:09.656697 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 27 10:31:09 crc kubenswrapper[4728]: I0227 10:31:09.665755 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 10:31:09 crc kubenswrapper[4728]: I0227 10:31:09.906692 4728 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 27 10:31:10 crc kubenswrapper[4728]: I0227 10:31:10.064380 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 27 10:31:10 crc kubenswrapper[4728]: I0227 10:31:10.112437 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 10:31:10 crc kubenswrapper[4728]: E0227 10:31:10.229921 4728 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 27 10:31:10 crc kubenswrapper[4728]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-55c8c74798-tbnmx_openshift-authentication_0fed02bf-7183-45bc-84f1-4191d7fd7be2_0(97c37f3867e0aa3260260e4e8175a765cf7ce35517300566dea02dae38e340ac): error adding pod openshift-authentication_oauth-openshift-55c8c74798-tbnmx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"97c37f3867e0aa3260260e4e8175a765cf7ce35517300566dea02dae38e340ac" Netns:"/var/run/netns/3e937dcf-3faa-4805-909c-28333f8058c8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-55c8c74798-tbnmx;K8S_POD_INFRA_CONTAINER_ID=97c37f3867e0aa3260260e4e8175a765cf7ce35517300566dea02dae38e340ac;K8S_POD_UID=0fed02bf-7183-45bc-84f1-4191d7fd7be2" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-55c8c74798-tbnmx] networking: Multus: [openshift-authentication/oauth-openshift-55c8c74798-tbnmx/0fed02bf-7183-45bc-84f1-4191d7fd7be2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-55c8c74798-tbnmx in out of cluster comm: pod "oauth-openshift-55c8c74798-tbnmx" not found Feb 27 10:31:10 crc kubenswrapper[4728]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 10:31:10 crc kubenswrapper[4728]: > Feb 27 10:31:10 crc kubenswrapper[4728]: E0227 10:31:10.230263 4728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 27 10:31:10 crc kubenswrapper[4728]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-55c8c74798-tbnmx_openshift-authentication_0fed02bf-7183-45bc-84f1-4191d7fd7be2_0(97c37f3867e0aa3260260e4e8175a765cf7ce35517300566dea02dae38e340ac): error adding pod openshift-authentication_oauth-openshift-55c8c74798-tbnmx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"97c37f3867e0aa3260260e4e8175a765cf7ce35517300566dea02dae38e340ac" Netns:"/var/run/netns/3e937dcf-3faa-4805-909c-28333f8058c8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-55c8c74798-tbnmx;K8S_POD_INFRA_CONTAINER_ID=97c37f3867e0aa3260260e4e8175a765cf7ce35517300566dea02dae38e340ac;K8S_POD_UID=0fed02bf-7183-45bc-84f1-4191d7fd7be2" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-55c8c74798-tbnmx] networking: Multus: [openshift-authentication/oauth-openshift-55c8c74798-tbnmx/0fed02bf-7183-45bc-84f1-4191d7fd7be2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-55c8c74798-tbnmx in out of cluster comm: pod "oauth-openshift-55c8c74798-tbnmx" not found Feb 27 10:31:10 crc kubenswrapper[4728]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 10:31:10 crc kubenswrapper[4728]: > pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:10 crc kubenswrapper[4728]: E0227 10:31:10.230290 4728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 27 10:31:10 crc kubenswrapper[4728]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-55c8c74798-tbnmx_openshift-authentication_0fed02bf-7183-45bc-84f1-4191d7fd7be2_0(97c37f3867e0aa3260260e4e8175a765cf7ce35517300566dea02dae38e340ac): error adding pod openshift-authentication_oauth-openshift-55c8c74798-tbnmx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"97c37f3867e0aa3260260e4e8175a765cf7ce35517300566dea02dae38e340ac" Netns:"/var/run/netns/3e937dcf-3faa-4805-909c-28333f8058c8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-55c8c74798-tbnmx;K8S_POD_INFRA_CONTAINER_ID=97c37f3867e0aa3260260e4e8175a765cf7ce35517300566dea02dae38e340ac;K8S_POD_UID=0fed02bf-7183-45bc-84f1-4191d7fd7be2" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-55c8c74798-tbnmx] networking: Multus: [openshift-authentication/oauth-openshift-55c8c74798-tbnmx/0fed02bf-7183-45bc-84f1-4191d7fd7be2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-55c8c74798-tbnmx in out of cluster comm: pod "oauth-openshift-55c8c74798-tbnmx" not found Feb 27 10:31:10 crc 
kubenswrapper[4728]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 10:31:10 crc kubenswrapper[4728]: > pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:10 crc kubenswrapper[4728]: E0227 10:31:10.230364 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-55c8c74798-tbnmx_openshift-authentication(0fed02bf-7183-45bc-84f1-4191d7fd7be2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-55c8c74798-tbnmx_openshift-authentication(0fed02bf-7183-45bc-84f1-4191d7fd7be2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-55c8c74798-tbnmx_openshift-authentication_0fed02bf-7183-45bc-84f1-4191d7fd7be2_0(97c37f3867e0aa3260260e4e8175a765cf7ce35517300566dea02dae38e340ac): error adding pod openshift-authentication_oauth-openshift-55c8c74798-tbnmx to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"97c37f3867e0aa3260260e4e8175a765cf7ce35517300566dea02dae38e340ac\\\" Netns:\\\"/var/run/netns/3e937dcf-3faa-4805-909c-28333f8058c8\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-55c8c74798-tbnmx;K8S_POD_INFRA_CONTAINER_ID=97c37f3867e0aa3260260e4e8175a765cf7ce35517300566dea02dae38e340ac;K8S_POD_UID=0fed02bf-7183-45bc-84f1-4191d7fd7be2\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-55c8c74798-tbnmx] networking: Multus: 
[openshift-authentication/oauth-openshift-55c8c74798-tbnmx/0fed02bf-7183-45bc-84f1-4191d7fd7be2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-55c8c74798-tbnmx in out of cluster comm: pod \\\"oauth-openshift-55c8c74798-tbnmx\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" podUID="0fed02bf-7183-45bc-84f1-4191d7fd7be2" Feb 27 10:31:10 crc kubenswrapper[4728]: I0227 10:31:10.264321 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:10 crc kubenswrapper[4728]: I0227 10:31:10.264983 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:10 crc kubenswrapper[4728]: I0227 10:31:10.283946 4728 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 27 10:31:10 crc kubenswrapper[4728]: I0227 10:31:10.557173 4728 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 27 10:31:10 crc kubenswrapper[4728]: I0227 10:31:10.585737 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 27 10:31:10 crc kubenswrapper[4728]: I0227 10:31:10.610826 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 27 10:31:10 crc kubenswrapper[4728]: I0227 10:31:10.612724 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 27 10:31:10 crc kubenswrapper[4728]: I0227 10:31:10.760991 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 10:31:10 crc kubenswrapper[4728]: I0227 10:31:10.837812 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 27 10:31:10 crc kubenswrapper[4728]: I0227 10:31:10.906091 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 27 10:31:10 crc kubenswrapper[4728]: I0227 10:31:10.954184 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 27 10:31:11 crc kubenswrapper[4728]: I0227 10:31:11.104283 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 27 10:31:11 crc kubenswrapper[4728]: 
I0227 10:31:11.267547 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 27 10:31:11 crc kubenswrapper[4728]: I0227 10:31:11.324093 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 27 10:31:11 crc kubenswrapper[4728]: I0227 10:31:11.338611 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 27 10:31:11 crc kubenswrapper[4728]: I0227 10:31:11.369687 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 27 10:31:11 crc kubenswrapper[4728]: I0227 10:31:11.403283 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 27 10:31:11 crc kubenswrapper[4728]: I0227 10:31:11.412325 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 27 10:31:11 crc kubenswrapper[4728]: I0227 10:31:11.414853 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 27 10:31:11 crc kubenswrapper[4728]: I0227 10:31:11.440006 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 27 10:31:11 crc kubenswrapper[4728]: I0227 10:31:11.494483 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 27 10:31:11 crc kubenswrapper[4728]: I0227 10:31:11.713026 4728 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 10:31:11 crc kubenswrapper[4728]: I0227 10:31:11.713541 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://f44bbbbf88244223d404524160d5f84258c555bedaec4c3f6f047aaaa67a099d" gracePeriod=5 Feb 27 10:31:11 crc kubenswrapper[4728]: I0227 10:31:11.715041 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 27 10:31:11 crc kubenswrapper[4728]: I0227 10:31:11.772315 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 27 10:31:11 crc kubenswrapper[4728]: I0227 10:31:11.828058 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 27 10:31:11 crc kubenswrapper[4728]: I0227 10:31:11.954009 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55c8c74798-tbnmx"] Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.033638 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.058263 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.176897 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.277122 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" event={"ID":"0fed02bf-7183-45bc-84f1-4191d7fd7be2","Type":"ContainerStarted","Data":"5d7984c80b9e7943f3a9d190bc3c2374372553ae0647b429897e3d51dbb059a8"} Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.278636 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" event={"ID":"0fed02bf-7183-45bc-84f1-4191d7fd7be2","Type":"ContainerStarted","Data":"7aa5cb63a30c1cd4db02a2d4376bab6d69acd6a66bde9a3eb229428e3fa85354"} Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.278758 4728 patch_prober.go:28] interesting pod/oauth-openshift-55c8c74798-tbnmx container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.70:6443/healthz\": dial tcp 10.217.0.70:6443: connect: connection refused" start-of-body= Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.278842 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" podUID="0fed02bf-7183-45bc-84f1-4191d7fd7be2" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.70:6443/healthz\": dial tcp 10.217.0.70:6443: connect: connection refused" Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.278780 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.311127 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" podStartSLOduration=60.311107072 podStartE2EDuration="1m0.311107072s" podCreationTimestamp="2026-02-27 10:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:31:12.309460356 +0000 UTC m=+292.271826502" watchObservedRunningTime="2026-02-27 10:31:12.311107072 +0000 UTC m=+292.273473168" Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.339978 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.376666 4728 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.379139 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.430807 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.485137 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.672447 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.803582 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.804001 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.982473 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.982490 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 27 10:31:12 crc kubenswrapper[4728]: I0227 10:31:12.983696 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 27 10:31:13 crc kubenswrapper[4728]: I0227 10:31:13.040954 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 27 
10:31:13 crc kubenswrapper[4728]: I0227 10:31:13.066190 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 27 10:31:13 crc kubenswrapper[4728]: I0227 10:31:13.102768 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 27 10:31:13 crc kubenswrapper[4728]: I0227 10:31:13.103582 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 27 10:31:13 crc kubenswrapper[4728]: I0227 10:31:13.143700 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 27 10:31:13 crc kubenswrapper[4728]: I0227 10:31:13.315728 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-55c8c74798-tbnmx" Feb 27 10:31:13 crc kubenswrapper[4728]: I0227 10:31:13.376716 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 27 10:31:13 crc kubenswrapper[4728]: I0227 10:31:13.388942 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 27 10:31:13 crc kubenswrapper[4728]: I0227 10:31:13.407517 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 27 10:31:13 crc kubenswrapper[4728]: I0227 10:31:13.435184 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 27 10:31:13 crc kubenswrapper[4728]: I0227 10:31:13.494860 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 27 10:31:13 crc kubenswrapper[4728]: I0227 10:31:13.554904 4728 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 27 10:31:13 crc kubenswrapper[4728]: I0227 10:31:13.578797 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 27 10:31:13 crc kubenswrapper[4728]: I0227 10:31:13.625060 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 27 10:31:13 crc kubenswrapper[4728]: I0227 10:31:13.777783 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 27 10:31:13 crc kubenswrapper[4728]: I0227 10:31:13.953697 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 27 10:31:14 crc kubenswrapper[4728]: I0227 10:31:14.006129 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 27 10:31:14 crc kubenswrapper[4728]: I0227 10:31:14.089898 4728 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 27 10:31:14 crc kubenswrapper[4728]: I0227 10:31:14.276524 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 27 10:31:14 crc kubenswrapper[4728]: I0227 10:31:14.472305 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 27 10:31:14 crc kubenswrapper[4728]: I0227 10:31:14.491132 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 27 10:31:14 crc kubenswrapper[4728]: I0227 10:31:14.499940 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 10:31:14 crc kubenswrapper[4728]: I0227 
10:31:14.682606 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 27 10:31:14 crc kubenswrapper[4728]: I0227 10:31:14.803833 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 27 10:31:14 crc kubenswrapper[4728]: I0227 10:31:14.916876 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 27 10:31:14 crc kubenswrapper[4728]: I0227 10:31:14.947135 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 27 10:31:15 crc kubenswrapper[4728]: I0227 10:31:15.212054 4728 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 27 10:31:15 crc kubenswrapper[4728]: I0227 10:31:15.233159 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 27 10:31:15 crc kubenswrapper[4728]: I0227 10:31:15.484283 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 27 10:31:15 crc kubenswrapper[4728]: I0227 10:31:15.501313 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 27 10:31:15 crc kubenswrapper[4728]: I0227 10:31:15.723960 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 27 10:31:15 crc kubenswrapper[4728]: I0227 10:31:15.911387 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 27 10:31:15 crc kubenswrapper[4728]: I0227 10:31:15.933665 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 27 10:31:16 crc kubenswrapper[4728]: I0227 10:31:16.025147 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 27 10:31:16 crc kubenswrapper[4728]: I0227 10:31:16.167075 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 27 10:31:16 crc kubenswrapper[4728]: I0227 10:31:16.232615 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 27 10:31:16 crc kubenswrapper[4728]: I0227 10:31:16.320709 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 27 10:31:16 crc kubenswrapper[4728]: I0227 10:31:16.398334 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 27 10:31:16 crc kubenswrapper[4728]: I0227 10:31:16.417593 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 27 10:31:16 crc kubenswrapper[4728]: I0227 10:31:16.560140 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 27 10:31:16 crc kubenswrapper[4728]: I0227 10:31:16.690620 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 27 10:31:16 crc kubenswrapper[4728]: I0227 10:31:16.778291 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 27 10:31:16 crc kubenswrapper[4728]: I0227 10:31:16.807708 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 27 10:31:16 crc kubenswrapper[4728]: I0227 10:31:16.861929 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 27 10:31:16 crc kubenswrapper[4728]: I0227 10:31:16.895089 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 27 10:31:16 crc kubenswrapper[4728]: I0227 10:31:16.972244 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.037254 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.082920 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.085926 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.314438 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.315446 4728 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="f44bbbbf88244223d404524160d5f84258c555bedaec4c3f6f047aaaa67a099d" exitCode=137
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.315721 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e4cb5e310d28bc0ebfed6780f26a920af3f6f674416126d13f156d0411a01ee"
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.320723 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.321292 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.398848 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.398964 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.398992 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.399041 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.399077 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.399115 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.399223 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.399288 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.399316 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.399734 4728 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.399785 4728 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.399810 4728 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.399833 4728 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.413240 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.454279 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.501468 4728 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.502036 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.867642 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 27 10:31:17 crc kubenswrapper[4728]: I0227 10:31:17.868383 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 27 10:31:18 crc kubenswrapper[4728]: I0227 10:31:18.056756 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 27 10:31:18 crc kubenswrapper[4728]: I0227 10:31:18.322249 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 27 10:31:18 crc kubenswrapper[4728]: I0227 10:31:18.737292 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 27 10:31:18 crc kubenswrapper[4728]: I0227 10:31:18.841299 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 27 10:31:19 crc kubenswrapper[4728]: I0227 10:31:19.039795 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 27 10:31:20 crc kubenswrapper[4728]: I0227 10:31:20.031027 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 27 10:31:20 crc kubenswrapper[4728]: I0227 10:31:20.292048 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 27 10:31:35 crc kubenswrapper[4728]: I0227 10:31:35.922383 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 10:31:35 crc kubenswrapper[4728]: I0227 10:31:35.922937 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 10:31:35 crc kubenswrapper[4728]: I0227 10:31:35.922982 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh"
Feb 27 10:31:35 crc kubenswrapper[4728]: I0227 10:31:35.923548 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"983e19c2154a1b01db67f4b9f25a99f1aecc3d35ea0f570828eabe5e7d0b10ac"} pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 27 10:31:35 crc kubenswrapper[4728]: I0227 10:31:35.923604 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" containerID="cri-o://983e19c2154a1b01db67f4b9f25a99f1aecc3d35ea0f570828eabe5e7d0b10ac" gracePeriod=600
Feb 27 10:31:36 crc kubenswrapper[4728]: I0227 10:31:36.443688 4728 generic.go:334] "Generic (PLEG): container finished" podID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerID="983e19c2154a1b01db67f4b9f25a99f1aecc3d35ea0f570828eabe5e7d0b10ac" exitCode=0
Feb 27 10:31:36 crc kubenswrapper[4728]: I0227 10:31:36.443783 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerDied","Data":"983e19c2154a1b01db67f4b9f25a99f1aecc3d35ea0f570828eabe5e7d0b10ac"}
Feb 27 10:31:36 crc kubenswrapper[4728]: I0227 10:31:36.444030 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"2416fbc83dda100006dd5fec140cd5b4cb87d01da9d620e87c0949af705e048d"}
Feb 27 10:31:38 crc kubenswrapper[4728]: I0227 10:31:38.441181 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brtfb"]
Feb 27 10:31:38 crc kubenswrapper[4728]: I0227 10:31:38.441716 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-brtfb" podUID="7c5a3750-282d-4f84-a9c9-b3167aa283b8" containerName="registry-server" containerID="cri-o://2ff5fada8a2dc06eb2818512884c5e08be4152467cbfd2cf39873863b50bd1e6" gracePeriod=2
Feb 27 10:31:38 crc kubenswrapper[4728]: I0227 10:31:38.822659 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brtfb"
Feb 27 10:31:38 crc kubenswrapper[4728]: I0227 10:31:38.892047 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c5a3750-282d-4f84-a9c9-b3167aa283b8-utilities\") pod \"7c5a3750-282d-4f84-a9c9-b3167aa283b8\" (UID: \"7c5a3750-282d-4f84-a9c9-b3167aa283b8\") "
Feb 27 10:31:38 crc kubenswrapper[4728]: I0227 10:31:38.892155 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c5a3750-282d-4f84-a9c9-b3167aa283b8-catalog-content\") pod \"7c5a3750-282d-4f84-a9c9-b3167aa283b8\" (UID: \"7c5a3750-282d-4f84-a9c9-b3167aa283b8\") "
Feb 27 10:31:38 crc kubenswrapper[4728]: I0227 10:31:38.892287 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvlqh\" (UniqueName: \"kubernetes.io/projected/7c5a3750-282d-4f84-a9c9-b3167aa283b8-kube-api-access-dvlqh\") pod \"7c5a3750-282d-4f84-a9c9-b3167aa283b8\" (UID: \"7c5a3750-282d-4f84-a9c9-b3167aa283b8\") "
Feb 27 10:31:38 crc kubenswrapper[4728]: I0227 10:31:38.893242 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c5a3750-282d-4f84-a9c9-b3167aa283b8-utilities" (OuterVolumeSpecName: "utilities") pod "7c5a3750-282d-4f84-a9c9-b3167aa283b8" (UID: "7c5a3750-282d-4f84-a9c9-b3167aa283b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:31:38 crc kubenswrapper[4728]: I0227 10:31:38.898426 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c5a3750-282d-4f84-a9c9-b3167aa283b8-kube-api-access-dvlqh" (OuterVolumeSpecName: "kube-api-access-dvlqh") pod "7c5a3750-282d-4f84-a9c9-b3167aa283b8" (UID: "7c5a3750-282d-4f84-a9c9-b3167aa283b8"). InnerVolumeSpecName "kube-api-access-dvlqh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:31:38 crc kubenswrapper[4728]: I0227 10:31:38.946541 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c5a3750-282d-4f84-a9c9-b3167aa283b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c5a3750-282d-4f84-a9c9-b3167aa283b8" (UID: "7c5a3750-282d-4f84-a9c9-b3167aa283b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:31:38 crc kubenswrapper[4728]: I0227 10:31:38.993632 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvlqh\" (UniqueName: \"kubernetes.io/projected/7c5a3750-282d-4f84-a9c9-b3167aa283b8-kube-api-access-dvlqh\") on node \"crc\" DevicePath \"\""
Feb 27 10:31:38 crc kubenswrapper[4728]: I0227 10:31:38.993672 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c5a3750-282d-4f84-a9c9-b3167aa283b8-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 10:31:38 crc kubenswrapper[4728]: I0227 10:31:38.993683 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c5a3750-282d-4f84-a9c9-b3167aa283b8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 10:31:39 crc kubenswrapper[4728]: I0227 10:31:39.460929 4728 generic.go:334] "Generic (PLEG): container finished" podID="7c5a3750-282d-4f84-a9c9-b3167aa283b8" containerID="2ff5fada8a2dc06eb2818512884c5e08be4152467cbfd2cf39873863b50bd1e6" exitCode=0
Feb 27 10:31:39 crc kubenswrapper[4728]: I0227 10:31:39.461064 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brtfb"
Feb 27 10:31:39 crc kubenswrapper[4728]: I0227 10:31:39.461065 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brtfb" event={"ID":"7c5a3750-282d-4f84-a9c9-b3167aa283b8","Type":"ContainerDied","Data":"2ff5fada8a2dc06eb2818512884c5e08be4152467cbfd2cf39873863b50bd1e6"}
Feb 27 10:31:39 crc kubenswrapper[4728]: I0227 10:31:39.461575 4728 scope.go:117] "RemoveContainer" containerID="2ff5fada8a2dc06eb2818512884c5e08be4152467cbfd2cf39873863b50bd1e6"
Feb 27 10:31:39 crc kubenswrapper[4728]: I0227 10:31:39.461494 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brtfb" event={"ID":"7c5a3750-282d-4f84-a9c9-b3167aa283b8","Type":"ContainerDied","Data":"37c99086912d0b21379fdbc7215dde0cf9bb8add722cf47e61073c9943c2a0ea"}
Feb 27 10:31:39 crc kubenswrapper[4728]: I0227 10:31:39.492055 4728 scope.go:117] "RemoveContainer" containerID="4f6c04d856b378c061da9394525e4481bad2d1646739df2e4c933c0ece193de5"
Feb 27 10:31:39 crc kubenswrapper[4728]: I0227 10:31:39.508793 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brtfb"]
Feb 27 10:31:39 crc kubenswrapper[4728]: I0227 10:31:39.512535 4728 scope.go:117] "RemoveContainer" containerID="402e7c6d0bb1dc6970509d12a2436cdc1b369627cea9a8f6a8d3f64c8971e5ae"
Feb 27 10:31:39 crc kubenswrapper[4728]: I0227 10:31:39.518425 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-brtfb"]
Feb 27 10:31:39 crc kubenswrapper[4728]: I0227 10:31:39.543868 4728 scope.go:117] "RemoveContainer" containerID="2ff5fada8a2dc06eb2818512884c5e08be4152467cbfd2cf39873863b50bd1e6"
Feb 27 10:31:39 crc kubenswrapper[4728]: E0227 10:31:39.544288 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff5fada8a2dc06eb2818512884c5e08be4152467cbfd2cf39873863b50bd1e6\": container with ID starting with 2ff5fada8a2dc06eb2818512884c5e08be4152467cbfd2cf39873863b50bd1e6 not found: ID does not exist" containerID="2ff5fada8a2dc06eb2818512884c5e08be4152467cbfd2cf39873863b50bd1e6"
Feb 27 10:31:39 crc kubenswrapper[4728]: I0227 10:31:39.544327 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff5fada8a2dc06eb2818512884c5e08be4152467cbfd2cf39873863b50bd1e6"} err="failed to get container status \"2ff5fada8a2dc06eb2818512884c5e08be4152467cbfd2cf39873863b50bd1e6\": rpc error: code = NotFound desc = could not find container \"2ff5fada8a2dc06eb2818512884c5e08be4152467cbfd2cf39873863b50bd1e6\": container with ID starting with 2ff5fada8a2dc06eb2818512884c5e08be4152467cbfd2cf39873863b50bd1e6 not found: ID does not exist"
Feb 27 10:31:39 crc kubenswrapper[4728]: I0227 10:31:39.544356 4728 scope.go:117] "RemoveContainer" containerID="4f6c04d856b378c061da9394525e4481bad2d1646739df2e4c933c0ece193de5"
Feb 27 10:31:39 crc kubenswrapper[4728]: E0227 10:31:39.544839 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f6c04d856b378c061da9394525e4481bad2d1646739df2e4c933c0ece193de5\": container with ID starting with 4f6c04d856b378c061da9394525e4481bad2d1646739df2e4c933c0ece193de5 not found: ID does not exist" containerID="4f6c04d856b378c061da9394525e4481bad2d1646739df2e4c933c0ece193de5"
Feb 27 10:31:39 crc kubenswrapper[4728]: I0227 10:31:39.544871 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f6c04d856b378c061da9394525e4481bad2d1646739df2e4c933c0ece193de5"} err="failed to get container status \"4f6c04d856b378c061da9394525e4481bad2d1646739df2e4c933c0ece193de5\": rpc error: code = NotFound desc = could not find container \"4f6c04d856b378c061da9394525e4481bad2d1646739df2e4c933c0ece193de5\": container with ID starting with 4f6c04d856b378c061da9394525e4481bad2d1646739df2e4c933c0ece193de5 not found: ID does not exist"
Feb 27 10:31:39 crc kubenswrapper[4728]: I0227 10:31:39.544895 4728 scope.go:117] "RemoveContainer" containerID="402e7c6d0bb1dc6970509d12a2436cdc1b369627cea9a8f6a8d3f64c8971e5ae"
Feb 27 10:31:39 crc kubenswrapper[4728]: E0227 10:31:39.545196 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"402e7c6d0bb1dc6970509d12a2436cdc1b369627cea9a8f6a8d3f64c8971e5ae\": container with ID starting with 402e7c6d0bb1dc6970509d12a2436cdc1b369627cea9a8f6a8d3f64c8971e5ae not found: ID does not exist" containerID="402e7c6d0bb1dc6970509d12a2436cdc1b369627cea9a8f6a8d3f64c8971e5ae"
Feb 27 10:31:39 crc kubenswrapper[4728]: I0227 10:31:39.545219 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"402e7c6d0bb1dc6970509d12a2436cdc1b369627cea9a8f6a8d3f64c8971e5ae"} err="failed to get container status \"402e7c6d0bb1dc6970509d12a2436cdc1b369627cea9a8f6a8d3f64c8971e5ae\": rpc error: code = NotFound desc = could not find container \"402e7c6d0bb1dc6970509d12a2436cdc1b369627cea9a8f6a8d3f64c8971e5ae\": container with ID starting with 402e7c6d0bb1dc6970509d12a2436cdc1b369627cea9a8f6a8d3f64c8971e5ae not found: ID does not exist"
Feb 27 10:31:40 crc kubenswrapper[4728]: I0227 10:31:40.732996 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c5a3750-282d-4f84-a9c9-b3167aa283b8" path="/var/lib/kubelet/pods/7c5a3750-282d-4f84-a9c9-b3167aa283b8/volumes"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.321934 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t4bnk"]
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.322612 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t4bnk" podUID="7771abc7-886d-41eb-b966-74538062511f" containerName="registry-server" containerID="cri-o://7143b72ae2d7e05b85ad009f20d696a7570ddb41432db16b879e3dcbce7fcf63" gracePeriod=30
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.333053 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b97gn"]
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.333297 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b97gn" podUID="34088c2f-1e95-4227-9242-9e4cde7a9fde" containerName="registry-server" containerID="cri-o://177e9ea899e8eddd80df884e748aea9b96849c704208a9243f95166c80467dfb" gracePeriod=30
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.355717 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xv8vk"]
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.355944 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" podUID="438710a7-473e-43a3-8aee-6f1f2d5ac756" containerName="marketplace-operator" containerID="cri-o://32d42c8d87e85626ed012613fdb85429cd29ed489457e98837214d20f6df2f0a" gracePeriod=30
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.366228 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnfnp"]
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.366839 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wnfnp" podUID="b27f5cf8-de13-42a0-825a-0bc27ddc8466" containerName="registry-server" containerID="cri-o://02bbfcda99329ca111a91bf0c11f0695387513a40b4905311ce5a5a9fed4de41" gracePeriod=30
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.377897 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pztrz"]
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.378162 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pztrz" podUID="7850a694-dd44-4f4d-9b97-ecaa50efb803" containerName="registry-server" containerID="cri-o://0efe2242a39af5b84489968861f63da49209417961d8cf509813a4d0b0936141" gracePeriod=30
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.386831 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sgd7f"]
Feb 27 10:31:52 crc kubenswrapper[4728]: E0227 10:31:52.387048 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c5a3750-282d-4f84-a9c9-b3167aa283b8" containerName="extract-utilities"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.387064 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c5a3750-282d-4f84-a9c9-b3167aa283b8" containerName="extract-utilities"
Feb 27 10:31:52 crc kubenswrapper[4728]: E0227 10:31:52.387080 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c5a3750-282d-4f84-a9c9-b3167aa283b8" containerName="extract-content"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.387088 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c5a3750-282d-4f84-a9c9-b3167aa283b8" containerName="extract-content"
Feb 27 10:31:52 crc kubenswrapper[4728]: E0227 10:31:52.387095 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.387102 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 27 10:31:52 crc kubenswrapper[4728]: E0227 10:31:52.387116 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c5a3750-282d-4f84-a9c9-b3167aa283b8" containerName="registry-server"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.387122 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c5a3750-282d-4f84-a9c9-b3167aa283b8" containerName="registry-server"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.387209 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c5a3750-282d-4f84-a9c9-b3167aa283b8" containerName="registry-server"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.387221 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.387587 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sgd7f"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.436708 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sgd7f"]
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.477942 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwmln\" (UniqueName: \"kubernetes.io/projected/fe9f039c-47c3-4535-9098-1fcc175a79e6-kube-api-access-pwmln\") pod \"marketplace-operator-79b997595-sgd7f\" (UID: \"fe9f039c-47c3-4535-9098-1fcc175a79e6\") " pod="openshift-marketplace/marketplace-operator-79b997595-sgd7f"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.478544 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe9f039c-47c3-4535-9098-1fcc175a79e6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sgd7f\" (UID: \"fe9f039c-47c3-4535-9098-1fcc175a79e6\") " pod="openshift-marketplace/marketplace-operator-79b997595-sgd7f"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.478624 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe9f039c-47c3-4535-9098-1fcc175a79e6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sgd7f\" (UID: \"fe9f039c-47c3-4535-9098-1fcc175a79e6\") " pod="openshift-marketplace/marketplace-operator-79b997595-sgd7f"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.542916 4728 generic.go:334] "Generic (PLEG): container finished" podID="438710a7-473e-43a3-8aee-6f1f2d5ac756" containerID="32d42c8d87e85626ed012613fdb85429cd29ed489457e98837214d20f6df2f0a" exitCode=0
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.543386 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" event={"ID":"438710a7-473e-43a3-8aee-6f1f2d5ac756","Type":"ContainerDied","Data":"32d42c8d87e85626ed012613fdb85429cd29ed489457e98837214d20f6df2f0a"}
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.561655 4728 generic.go:334] "Generic (PLEG): container finished" podID="b27f5cf8-de13-42a0-825a-0bc27ddc8466" containerID="02bbfcda99329ca111a91bf0c11f0695387513a40b4905311ce5a5a9fed4de41" exitCode=0
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.561747 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnfnp" event={"ID":"b27f5cf8-de13-42a0-825a-0bc27ddc8466","Type":"ContainerDied","Data":"02bbfcda99329ca111a91bf0c11f0695387513a40b4905311ce5a5a9fed4de41"}
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.567551 4728 generic.go:334] "Generic (PLEG): container finished" podID="7850a694-dd44-4f4d-9b97-ecaa50efb803" containerID="0efe2242a39af5b84489968861f63da49209417961d8cf509813a4d0b0936141" exitCode=0
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.567623 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pztrz" event={"ID":"7850a694-dd44-4f4d-9b97-ecaa50efb803","Type":"ContainerDied","Data":"0efe2242a39af5b84489968861f63da49209417961d8cf509813a4d0b0936141"}
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.570427 4728 generic.go:334] "Generic (PLEG): container finished" podID="7771abc7-886d-41eb-b966-74538062511f" containerID="7143b72ae2d7e05b85ad009f20d696a7570ddb41432db16b879e3dcbce7fcf63" exitCode=0
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.570467 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4bnk" event={"ID":"7771abc7-886d-41eb-b966-74538062511f","Type":"ContainerDied","Data":"7143b72ae2d7e05b85ad009f20d696a7570ddb41432db16b879e3dcbce7fcf63"}
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.572214 4728 generic.go:334] "Generic (PLEG): container finished" podID="34088c2f-1e95-4227-9242-9e4cde7a9fde" containerID="177e9ea899e8eddd80df884e748aea9b96849c704208a9243f95166c80467dfb" exitCode=0
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.572238 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b97gn" event={"ID":"34088c2f-1e95-4227-9242-9e4cde7a9fde","Type":"ContainerDied","Data":"177e9ea899e8eddd80df884e748aea9b96849c704208a9243f95166c80467dfb"}
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.580363 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe9f039c-47c3-4535-9098-1fcc175a79e6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sgd7f\" (UID: \"fe9f039c-47c3-4535-9098-1fcc175a79e6\") " pod="openshift-marketplace/marketplace-operator-79b997595-sgd7f"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.580403 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe9f039c-47c3-4535-9098-1fcc175a79e6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sgd7f\" (UID: \"fe9f039c-47c3-4535-9098-1fcc175a79e6\") " pod="openshift-marketplace/marketplace-operator-79b997595-sgd7f"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.580426 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwmln\" (UniqueName: \"kubernetes.io/projected/fe9f039c-47c3-4535-9098-1fcc175a79e6-kube-api-access-pwmln\") pod \"marketplace-operator-79b997595-sgd7f\" (UID: \"fe9f039c-47c3-4535-9098-1fcc175a79e6\") " pod="openshift-marketplace/marketplace-operator-79b997595-sgd7f"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.581933 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe9f039c-47c3-4535-9098-1fcc175a79e6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sgd7f\" (UID: \"fe9f039c-47c3-4535-9098-1fcc175a79e6\") " pod="openshift-marketplace/marketplace-operator-79b997595-sgd7f"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.588198 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe9f039c-47c3-4535-9098-1fcc175a79e6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sgd7f\" (UID: \"fe9f039c-47c3-4535-9098-1fcc175a79e6\") " pod="openshift-marketplace/marketplace-operator-79b997595-sgd7f"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.596084 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwmln\" (UniqueName: \"kubernetes.io/projected/fe9f039c-47c3-4535-9098-1fcc175a79e6-kube-api-access-pwmln\") pod \"marketplace-operator-79b997595-sgd7f\" (UID: \"fe9f039c-47c3-4535-9098-1fcc175a79e6\") " pod="openshift-marketplace/marketplace-operator-79b997595-sgd7f"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.773814 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sgd7f"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.777751 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t4bnk"
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.888229 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmk8m\" (UniqueName: \"kubernetes.io/projected/7771abc7-886d-41eb-b966-74538062511f-kube-api-access-dmk8m\") pod \"7771abc7-886d-41eb-b966-74538062511f\" (UID: \"7771abc7-886d-41eb-b966-74538062511f\") "
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.888344 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7771abc7-886d-41eb-b966-74538062511f-utilities\") pod \"7771abc7-886d-41eb-b966-74538062511f\" (UID: \"7771abc7-886d-41eb-b966-74538062511f\") "
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.888491 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7771abc7-886d-41eb-b966-74538062511f-catalog-content\") pod \"7771abc7-886d-41eb-b966-74538062511f\" (UID: \"7771abc7-886d-41eb-b966-74538062511f\") "
Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.891580 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7771abc7-886d-41eb-b966-74538062511f-utilities" (OuterVolumeSpecName: "utilities") pod "7771abc7-886d-41eb-b966-74538062511f" (UID: "7771abc7-886d-41eb-b966-74538062511f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.898803 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7771abc7-886d-41eb-b966-74538062511f-kube-api-access-dmk8m" (OuterVolumeSpecName: "kube-api-access-dmk8m") pod "7771abc7-886d-41eb-b966-74538062511f" (UID: "7771abc7-886d-41eb-b966-74538062511f"). InnerVolumeSpecName "kube-api-access-dmk8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.952926 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnfnp" Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.964527 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pztrz" Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.966550 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b97gn" Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.968103 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.981135 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7771abc7-886d-41eb-b966-74538062511f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7771abc7-886d-41eb-b966-74538062511f" (UID: "7771abc7-886d-41eb-b966-74538062511f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.989812 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpx97\" (UniqueName: \"kubernetes.io/projected/7850a694-dd44-4f4d-9b97-ecaa50efb803-kube-api-access-bpx97\") pod \"7850a694-dd44-4f4d-9b97-ecaa50efb803\" (UID: \"7850a694-dd44-4f4d-9b97-ecaa50efb803\") " Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.989864 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7850a694-dd44-4f4d-9b97-ecaa50efb803-catalog-content\") pod \"7850a694-dd44-4f4d-9b97-ecaa50efb803\" (UID: \"7850a694-dd44-4f4d-9b97-ecaa50efb803\") " Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.989911 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27f5cf8-de13-42a0-825a-0bc27ddc8466-utilities\") pod \"b27f5cf8-de13-42a0-825a-0bc27ddc8466\" (UID: \"b27f5cf8-de13-42a0-825a-0bc27ddc8466\") " Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.989950 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npgnf\" (UniqueName: \"kubernetes.io/projected/34088c2f-1e95-4227-9242-9e4cde7a9fde-kube-api-access-npgnf\") pod \"34088c2f-1e95-4227-9242-9e4cde7a9fde\" (UID: \"34088c2f-1e95-4227-9242-9e4cde7a9fde\") " Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.990003 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7850a694-dd44-4f4d-9b97-ecaa50efb803-utilities\") pod \"7850a694-dd44-4f4d-9b97-ecaa50efb803\" (UID: \"7850a694-dd44-4f4d-9b97-ecaa50efb803\") " Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.990039 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/34088c2f-1e95-4227-9242-9e4cde7a9fde-utilities\") pod \"34088c2f-1e95-4227-9242-9e4cde7a9fde\" (UID: \"34088c2f-1e95-4227-9242-9e4cde7a9fde\") " Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.990080 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6qpz\" (UniqueName: \"kubernetes.io/projected/b27f5cf8-de13-42a0-825a-0bc27ddc8466-kube-api-access-x6qpz\") pod \"b27f5cf8-de13-42a0-825a-0bc27ddc8466\" (UID: \"b27f5cf8-de13-42a0-825a-0bc27ddc8466\") " Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.990098 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27f5cf8-de13-42a0-825a-0bc27ddc8466-catalog-content\") pod \"b27f5cf8-de13-42a0-825a-0bc27ddc8466\" (UID: \"b27f5cf8-de13-42a0-825a-0bc27ddc8466\") " Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.990116 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34088c2f-1e95-4227-9242-9e4cde7a9fde-catalog-content\") pod \"34088c2f-1e95-4227-9242-9e4cde7a9fde\" (UID: \"34088c2f-1e95-4227-9242-9e4cde7a9fde\") " Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.990304 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7771abc7-886d-41eb-b966-74538062511f-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.990317 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7771abc7-886d-41eb-b966-74538062511f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.990329 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmk8m\" (UniqueName: 
\"kubernetes.io/projected/7771abc7-886d-41eb-b966-74538062511f-kube-api-access-dmk8m\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.992097 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7850a694-dd44-4f4d-9b97-ecaa50efb803-utilities" (OuterVolumeSpecName: "utilities") pod "7850a694-dd44-4f4d-9b97-ecaa50efb803" (UID: "7850a694-dd44-4f4d-9b97-ecaa50efb803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.992550 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34088c2f-1e95-4227-9242-9e4cde7a9fde-utilities" (OuterVolumeSpecName: "utilities") pod "34088c2f-1e95-4227-9242-9e4cde7a9fde" (UID: "34088c2f-1e95-4227-9242-9e4cde7a9fde"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.992891 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b27f5cf8-de13-42a0-825a-0bc27ddc8466-utilities" (OuterVolumeSpecName: "utilities") pod "b27f5cf8-de13-42a0-825a-0bc27ddc8466" (UID: "b27f5cf8-de13-42a0-825a-0bc27ddc8466"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:31:52 crc kubenswrapper[4728]: I0227 10:31:52.994743 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b27f5cf8-de13-42a0-825a-0bc27ddc8466-kube-api-access-x6qpz" (OuterVolumeSpecName: "kube-api-access-x6qpz") pod "b27f5cf8-de13-42a0-825a-0bc27ddc8466" (UID: "b27f5cf8-de13-42a0-825a-0bc27ddc8466"). InnerVolumeSpecName "kube-api-access-x6qpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.000798 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34088c2f-1e95-4227-9242-9e4cde7a9fde-kube-api-access-npgnf" (OuterVolumeSpecName: "kube-api-access-npgnf") pod "34088c2f-1e95-4227-9242-9e4cde7a9fde" (UID: "34088c2f-1e95-4227-9242-9e4cde7a9fde"). InnerVolumeSpecName "kube-api-access-npgnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.000878 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7850a694-dd44-4f4d-9b97-ecaa50efb803-kube-api-access-bpx97" (OuterVolumeSpecName: "kube-api-access-bpx97") pod "7850a694-dd44-4f4d-9b97-ecaa50efb803" (UID: "7850a694-dd44-4f4d-9b97-ecaa50efb803"). InnerVolumeSpecName "kube-api-access-bpx97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.021015 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b27f5cf8-de13-42a0-825a-0bc27ddc8466-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b27f5cf8-de13-42a0-825a-0bc27ddc8466" (UID: "b27f5cf8-de13-42a0-825a-0bc27ddc8466"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.089670 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34088c2f-1e95-4227-9242-9e4cde7a9fde-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34088c2f-1e95-4227-9242-9e4cde7a9fde" (UID: "34088c2f-1e95-4227-9242-9e4cde7a9fde"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.093064 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87rs7\" (UniqueName: \"kubernetes.io/projected/438710a7-473e-43a3-8aee-6f1f2d5ac756-kube-api-access-87rs7\") pod \"438710a7-473e-43a3-8aee-6f1f2d5ac756\" (UID: \"438710a7-473e-43a3-8aee-6f1f2d5ac756\") " Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.093204 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/438710a7-473e-43a3-8aee-6f1f2d5ac756-marketplace-operator-metrics\") pod \"438710a7-473e-43a3-8aee-6f1f2d5ac756\" (UID: \"438710a7-473e-43a3-8aee-6f1f2d5ac756\") " Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.093267 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/438710a7-473e-43a3-8aee-6f1f2d5ac756-marketplace-trusted-ca\") pod \"438710a7-473e-43a3-8aee-6f1f2d5ac756\" (UID: \"438710a7-473e-43a3-8aee-6f1f2d5ac756\") " Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.093608 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34088c2f-1e95-4227-9242-9e4cde7a9fde-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.093633 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6qpz\" (UniqueName: \"kubernetes.io/projected/b27f5cf8-de13-42a0-825a-0bc27ddc8466-kube-api-access-x6qpz\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.093649 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27f5cf8-de13-42a0-825a-0bc27ddc8466-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:53 
crc kubenswrapper[4728]: I0227 10:31:53.093661 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34088c2f-1e95-4227-9242-9e4cde7a9fde-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.093675 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpx97\" (UniqueName: \"kubernetes.io/projected/7850a694-dd44-4f4d-9b97-ecaa50efb803-kube-api-access-bpx97\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.093686 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27f5cf8-de13-42a0-825a-0bc27ddc8466-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.093698 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npgnf\" (UniqueName: \"kubernetes.io/projected/34088c2f-1e95-4227-9242-9e4cde7a9fde-kube-api-access-npgnf\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.093710 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7850a694-dd44-4f4d-9b97-ecaa50efb803-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.096202 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/438710a7-473e-43a3-8aee-6f1f2d5ac756-kube-api-access-87rs7" (OuterVolumeSpecName: "kube-api-access-87rs7") pod "438710a7-473e-43a3-8aee-6f1f2d5ac756" (UID: "438710a7-473e-43a3-8aee-6f1f2d5ac756"). InnerVolumeSpecName "kube-api-access-87rs7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.097249 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438710a7-473e-43a3-8aee-6f1f2d5ac756-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "438710a7-473e-43a3-8aee-6f1f2d5ac756" (UID: "438710a7-473e-43a3-8aee-6f1f2d5ac756"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.098682 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/438710a7-473e-43a3-8aee-6f1f2d5ac756-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "438710a7-473e-43a3-8aee-6f1f2d5ac756" (UID: "438710a7-473e-43a3-8aee-6f1f2d5ac756"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.164786 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7850a694-dd44-4f4d-9b97-ecaa50efb803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7850a694-dd44-4f4d-9b97-ecaa50efb803" (UID: "7850a694-dd44-4f4d-9b97-ecaa50efb803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.196201 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87rs7\" (UniqueName: \"kubernetes.io/projected/438710a7-473e-43a3-8aee-6f1f2d5ac756-kube-api-access-87rs7\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.196267 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7850a694-dd44-4f4d-9b97-ecaa50efb803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.196302 4728 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/438710a7-473e-43a3-8aee-6f1f2d5ac756-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.196318 4728 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/438710a7-473e-43a3-8aee-6f1f2d5ac756-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.290298 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sgd7f"] Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.579019 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sgd7f" event={"ID":"fe9f039c-47c3-4535-9098-1fcc175a79e6","Type":"ContainerStarted","Data":"4793f98151f2a85f1fe957a30944c96dde6ae2189386311cf8dcd8622292e8fc"} Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.579386 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sgd7f" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.579402 4728 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sgd7f" event={"ID":"fe9f039c-47c3-4535-9098-1fcc175a79e6","Type":"ContainerStarted","Data":"ee1056ed24003a649754133334340af2355ecc067829d3ff2d1333a4c06d838c"} Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.580881 4728 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sgd7f container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" start-of-body= Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.580954 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-sgd7f" podUID="fe9f039c-47c3-4535-9098-1fcc175a79e6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.581789 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t4bnk" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.581823 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4bnk" event={"ID":"7771abc7-886d-41eb-b966-74538062511f","Type":"ContainerDied","Data":"c4cbddc869f0ebf2e50759993c9b7fff94903bd1dd9626a0aa995561acde2004"} Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.581869 4728 scope.go:117] "RemoveContainer" containerID="7143b72ae2d7e05b85ad009f20d696a7570ddb41432db16b879e3dcbce7fcf63" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.588255 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b97gn" event={"ID":"34088c2f-1e95-4227-9242-9e4cde7a9fde","Type":"ContainerDied","Data":"7da586308b762260a4e430b04ac5f27ed14d9e51d52cb5234c07ab19d03e39a3"} Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.588307 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b97gn" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.589431 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" event={"ID":"438710a7-473e-43a3-8aee-6f1f2d5ac756","Type":"ContainerDied","Data":"e1713c9eb79bc30322252cbda60fc1ec5d1ff31c8c4887d3abfdf2dc2017d475"} Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.589557 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xv8vk" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.597469 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnfnp" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.597456 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnfnp" event={"ID":"b27f5cf8-de13-42a0-825a-0bc27ddc8466","Type":"ContainerDied","Data":"5e2ca7c5d14ba46ed81fb10d92732b7e6c5a717dcd8d97a255b930e79f1c1a66"} Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.602576 4728 scope.go:117] "RemoveContainer" containerID="eee8f3ab4c0874cbe5b816705008eea71b4e9947d8f93815b1a860077da9dca7" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.604271 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pztrz" event={"ID":"7850a694-dd44-4f4d-9b97-ecaa50efb803","Type":"ContainerDied","Data":"3629dbc3cc76503a951f580fa44eccb5e3a74544f144dc7acd9c93a06db286d0"} Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.604324 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pztrz" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.620907 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-sgd7f" podStartSLOduration=1.6208401590000001 podStartE2EDuration="1.620840159s" podCreationTimestamp="2026-02-27 10:31:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:31:53.602292807 +0000 UTC m=+333.564658913" watchObservedRunningTime="2026-02-27 10:31:53.620840159 +0000 UTC m=+333.583206265" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.622597 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t4bnk"] Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.625364 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t4bnk"] Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.635936 4728 scope.go:117] "RemoveContainer" containerID="99e0529609a00e3a0a6091c152fb0164d74318871a1e821e5dfd5afc992a9c40" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.658425 4728 scope.go:117] "RemoveContainer" containerID="177e9ea899e8eddd80df884e748aea9b96849c704208a9243f95166c80467dfb" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.667587 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b97gn"] Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.676605 4728 scope.go:117] "RemoveContainer" containerID="4251be889dce4d9fe923660b66023de97a4a4cbb7b4de1e65dd25b25c43d14c4" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.685507 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b97gn"] Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.695571 4728 scope.go:117] "RemoveContainer" 
containerID="5c093ac091d15329058dd0850f1ace6f6015413207b11a5938f328ae75ae0042" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.697277 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pztrz"] Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.713589 4728 scope.go:117] "RemoveContainer" containerID="32d42c8d87e85626ed012613fdb85429cd29ed489457e98837214d20f6df2f0a" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.715173 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pztrz"] Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.719619 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xv8vk"] Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.732857 4728 scope.go:117] "RemoveContainer" containerID="02bbfcda99329ca111a91bf0c11f0695387513a40b4905311ce5a5a9fed4de41" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.733409 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xv8vk"] Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.737839 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnfnp"] Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.748940 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnfnp"] Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.757208 4728 scope.go:117] "RemoveContainer" containerID="1a3d0bf303acbd394ca8ea75ea721ba7995d02c2aa82a9c837c9b4ebb3cb0866" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.774378 4728 scope.go:117] "RemoveContainer" containerID="f4e883f75c7162b2da2577019ed98c246c7b645d40b4538e79648dcac6a8405b" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.790644 4728 scope.go:117] "RemoveContainer" 
containerID="0efe2242a39af5b84489968861f63da49209417961d8cf509813a4d0b0936141" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.811042 4728 scope.go:117] "RemoveContainer" containerID="997a4512d4443c069d5956b3d5c28c9637029a19ff645862f360e8945c384b28" Feb 27 10:31:53 crc kubenswrapper[4728]: I0227 10:31:53.830496 4728 scope.go:117] "RemoveContainer" containerID="63c6ec26161b66afa73c35c10df5963837024140867e7a30cf365ba8d2d788d7" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.619163 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sgd7f" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.730959 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34088c2f-1e95-4227-9242-9e4cde7a9fde" path="/var/lib/kubelet/pods/34088c2f-1e95-4227-9242-9e4cde7a9fde/volumes" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.731587 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="438710a7-473e-43a3-8aee-6f1f2d5ac756" path="/var/lib/kubelet/pods/438710a7-473e-43a3-8aee-6f1f2d5ac756/volumes" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.732165 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7771abc7-886d-41eb-b966-74538062511f" path="/var/lib/kubelet/pods/7771abc7-886d-41eb-b966-74538062511f/volumes" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.733117 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7850a694-dd44-4f4d-9b97-ecaa50efb803" path="/var/lib/kubelet/pods/7850a694-dd44-4f4d-9b97-ecaa50efb803/volumes" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.733654 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b27f5cf8-de13-42a0-825a-0bc27ddc8466" path="/var/lib/kubelet/pods/b27f5cf8-de13-42a0-825a-0bc27ddc8466/volumes" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.859796 4728 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/community-operators-llvng"] Feb 27 10:31:54 crc kubenswrapper[4728]: E0227 10:31:54.860558 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27f5cf8-de13-42a0-825a-0bc27ddc8466" containerName="extract-utilities" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.860637 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27f5cf8-de13-42a0-825a-0bc27ddc8466" containerName="extract-utilities" Feb 27 10:31:54 crc kubenswrapper[4728]: E0227 10:31:54.862933 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34088c2f-1e95-4227-9242-9e4cde7a9fde" containerName="extract-utilities" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.863066 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="34088c2f-1e95-4227-9242-9e4cde7a9fde" containerName="extract-utilities" Feb 27 10:31:54 crc kubenswrapper[4728]: E0227 10:31:54.863146 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27f5cf8-de13-42a0-825a-0bc27ddc8466" containerName="registry-server" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.863224 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27f5cf8-de13-42a0-825a-0bc27ddc8466" containerName="registry-server" Feb 27 10:31:54 crc kubenswrapper[4728]: E0227 10:31:54.863298 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7771abc7-886d-41eb-b966-74538062511f" containerName="extract-utilities" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.863374 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7771abc7-886d-41eb-b966-74538062511f" containerName="extract-utilities" Feb 27 10:31:54 crc kubenswrapper[4728]: E0227 10:31:54.863453 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7771abc7-886d-41eb-b966-74538062511f" containerName="extract-content" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.863548 4728 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7771abc7-886d-41eb-b966-74538062511f" containerName="extract-content" Feb 27 10:31:54 crc kubenswrapper[4728]: E0227 10:31:54.863662 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7850a694-dd44-4f4d-9b97-ecaa50efb803" containerName="extract-utilities" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.863888 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7850a694-dd44-4f4d-9b97-ecaa50efb803" containerName="extract-utilities" Feb 27 10:31:54 crc kubenswrapper[4728]: E0227 10:31:54.863996 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34088c2f-1e95-4227-9242-9e4cde7a9fde" containerName="registry-server" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.864075 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="34088c2f-1e95-4227-9242-9e4cde7a9fde" containerName="registry-server" Feb 27 10:31:54 crc kubenswrapper[4728]: E0227 10:31:54.864148 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34088c2f-1e95-4227-9242-9e4cde7a9fde" containerName="extract-content" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.864221 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="34088c2f-1e95-4227-9242-9e4cde7a9fde" containerName="extract-content" Feb 27 10:31:54 crc kubenswrapper[4728]: E0227 10:31:54.864293 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438710a7-473e-43a3-8aee-6f1f2d5ac756" containerName="marketplace-operator" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.864360 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="438710a7-473e-43a3-8aee-6f1f2d5ac756" containerName="marketplace-operator" Feb 27 10:31:54 crc kubenswrapper[4728]: E0227 10:31:54.864441 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7850a694-dd44-4f4d-9b97-ecaa50efb803" containerName="registry-server" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.864542 4728 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7850a694-dd44-4f4d-9b97-ecaa50efb803" containerName="registry-server" Feb 27 10:31:54 crc kubenswrapper[4728]: E0227 10:31:54.864627 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7850a694-dd44-4f4d-9b97-ecaa50efb803" containerName="extract-content" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.864698 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7850a694-dd44-4f4d-9b97-ecaa50efb803" containerName="extract-content" Feb 27 10:31:54 crc kubenswrapper[4728]: E0227 10:31:54.864774 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7771abc7-886d-41eb-b966-74538062511f" containerName="registry-server" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.864853 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7771abc7-886d-41eb-b966-74538062511f" containerName="registry-server" Feb 27 10:31:54 crc kubenswrapper[4728]: E0227 10:31:54.864933 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27f5cf8-de13-42a0-825a-0bc27ddc8466" containerName="extract-content" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.865003 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27f5cf8-de13-42a0-825a-0bc27ddc8466" containerName="extract-content" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.865199 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="438710a7-473e-43a3-8aee-6f1f2d5ac756" containerName="marketplace-operator" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.865272 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7771abc7-886d-41eb-b966-74538062511f" containerName="registry-server" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.865333 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7850a694-dd44-4f4d-9b97-ecaa50efb803" containerName="registry-server" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.865393 4728 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="34088c2f-1e95-4227-9242-9e4cde7a9fde" containerName="registry-server" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.865451 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b27f5cf8-de13-42a0-825a-0bc27ddc8466" containerName="registry-server" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.867454 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-llvng" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.869215 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-llvng"] Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.869600 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.919403 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ce7dd06-3478-4aa5-a8b5-d371a01feb41-utilities\") pod \"community-operators-llvng\" (UID: \"9ce7dd06-3478-4aa5-a8b5-d371a01feb41\") " pod="openshift-marketplace/community-operators-llvng" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.919471 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsz47\" (UniqueName: \"kubernetes.io/projected/9ce7dd06-3478-4aa5-a8b5-d371a01feb41-kube-api-access-qsz47\") pod \"community-operators-llvng\" (UID: \"9ce7dd06-3478-4aa5-a8b5-d371a01feb41\") " pod="openshift-marketplace/community-operators-llvng" Feb 27 10:31:54 crc kubenswrapper[4728]: I0227 10:31:54.919540 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ce7dd06-3478-4aa5-a8b5-d371a01feb41-catalog-content\") pod \"community-operators-llvng\" 
(UID: \"9ce7dd06-3478-4aa5-a8b5-d371a01feb41\") " pod="openshift-marketplace/community-operators-llvng" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.021181 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ce7dd06-3478-4aa5-a8b5-d371a01feb41-catalog-content\") pod \"community-operators-llvng\" (UID: \"9ce7dd06-3478-4aa5-a8b5-d371a01feb41\") " pod="openshift-marketplace/community-operators-llvng" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.021270 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ce7dd06-3478-4aa5-a8b5-d371a01feb41-utilities\") pod \"community-operators-llvng\" (UID: \"9ce7dd06-3478-4aa5-a8b5-d371a01feb41\") " pod="openshift-marketplace/community-operators-llvng" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.021361 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsz47\" (UniqueName: \"kubernetes.io/projected/9ce7dd06-3478-4aa5-a8b5-d371a01feb41-kube-api-access-qsz47\") pod \"community-operators-llvng\" (UID: \"9ce7dd06-3478-4aa5-a8b5-d371a01feb41\") " pod="openshift-marketplace/community-operators-llvng" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.021837 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ce7dd06-3478-4aa5-a8b5-d371a01feb41-utilities\") pod \"community-operators-llvng\" (UID: \"9ce7dd06-3478-4aa5-a8b5-d371a01feb41\") " pod="openshift-marketplace/community-operators-llvng" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.022291 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ce7dd06-3478-4aa5-a8b5-d371a01feb41-catalog-content\") pod \"community-operators-llvng\" (UID: 
\"9ce7dd06-3478-4aa5-a8b5-d371a01feb41\") " pod="openshift-marketplace/community-operators-llvng" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.043004 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsz47\" (UniqueName: \"kubernetes.io/projected/9ce7dd06-3478-4aa5-a8b5-d371a01feb41-kube-api-access-qsz47\") pod \"community-operators-llvng\" (UID: \"9ce7dd06-3478-4aa5-a8b5-d371a01feb41\") " pod="openshift-marketplace/community-operators-llvng" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.050411 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-spm46"] Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.051486 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-spm46" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.056348 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.062013 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-spm46"] Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.123137 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlbdr\" (UniqueName: \"kubernetes.io/projected/57410d64-6726-4c64-b9f4-e1eaad0aa42e-kube-api-access-dlbdr\") pod \"certified-operators-spm46\" (UID: \"57410d64-6726-4c64-b9f4-e1eaad0aa42e\") " pod="openshift-marketplace/certified-operators-spm46" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.123187 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57410d64-6726-4c64-b9f4-e1eaad0aa42e-catalog-content\") pod \"certified-operators-spm46\" (UID: 
\"57410d64-6726-4c64-b9f4-e1eaad0aa42e\") " pod="openshift-marketplace/certified-operators-spm46" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.123229 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57410d64-6726-4c64-b9f4-e1eaad0aa42e-utilities\") pod \"certified-operators-spm46\" (UID: \"57410d64-6726-4c64-b9f4-e1eaad0aa42e\") " pod="openshift-marketplace/certified-operators-spm46" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.197491 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-llvng" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.224970 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlbdr\" (UniqueName: \"kubernetes.io/projected/57410d64-6726-4c64-b9f4-e1eaad0aa42e-kube-api-access-dlbdr\") pod \"certified-operators-spm46\" (UID: \"57410d64-6726-4c64-b9f4-e1eaad0aa42e\") " pod="openshift-marketplace/certified-operators-spm46" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.225026 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57410d64-6726-4c64-b9f4-e1eaad0aa42e-catalog-content\") pod \"certified-operators-spm46\" (UID: \"57410d64-6726-4c64-b9f4-e1eaad0aa42e\") " pod="openshift-marketplace/certified-operators-spm46" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.225123 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57410d64-6726-4c64-b9f4-e1eaad0aa42e-utilities\") pod \"certified-operators-spm46\" (UID: \"57410d64-6726-4c64-b9f4-e1eaad0aa42e\") " pod="openshift-marketplace/certified-operators-spm46" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.225797 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57410d64-6726-4c64-b9f4-e1eaad0aa42e-utilities\") pod \"certified-operators-spm46\" (UID: \"57410d64-6726-4c64-b9f4-e1eaad0aa42e\") " pod="openshift-marketplace/certified-operators-spm46" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.225905 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57410d64-6726-4c64-b9f4-e1eaad0aa42e-catalog-content\") pod \"certified-operators-spm46\" (UID: \"57410d64-6726-4c64-b9f4-e1eaad0aa42e\") " pod="openshift-marketplace/certified-operators-spm46" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.243002 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlbdr\" (UniqueName: \"kubernetes.io/projected/57410d64-6726-4c64-b9f4-e1eaad0aa42e-kube-api-access-dlbdr\") pod \"certified-operators-spm46\" (UID: \"57410d64-6726-4c64-b9f4-e1eaad0aa42e\") " pod="openshift-marketplace/certified-operators-spm46" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.378930 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-spm46" Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.629121 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-llvng"] Feb 27 10:31:55 crc kubenswrapper[4728]: W0227 10:31:55.636277 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ce7dd06_3478_4aa5_a8b5_d371a01feb41.slice/crio-dc4eec735f6f5c260762d1076fb10978a97cd9974e0e727cccefb996c02f0fd2 WatchSource:0}: Error finding container dc4eec735f6f5c260762d1076fb10978a97cd9974e0e727cccefb996c02f0fd2: Status 404 returned error can't find the container with id dc4eec735f6f5c260762d1076fb10978a97cd9974e0e727cccefb996c02f0fd2 Feb 27 10:31:55 crc kubenswrapper[4728]: I0227 10:31:55.776738 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-spm46"] Feb 27 10:31:55 crc kubenswrapper[4728]: W0227 10:31:55.784875 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57410d64_6726_4c64_b9f4_e1eaad0aa42e.slice/crio-fff444b2a9eea7d510780b700c768f44bce8071c641caf45f904e79e8411d930 WatchSource:0}: Error finding container fff444b2a9eea7d510780b700c768f44bce8071c641caf45f904e79e8411d930: Status 404 returned error can't find the container with id fff444b2a9eea7d510780b700c768f44bce8071c641caf45f904e79e8411d930 Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.064299 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g2w72"] Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.064923 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.120134 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g2w72"] Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.147873 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2b568d1-390e-4e9d-a077-90fd768200b6-bound-sa-token\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.147953 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e2b568d1-390e-4e9d-a077-90fd768200b6-registry-certificates\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.147992 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e2b568d1-390e-4e9d-a077-90fd768200b6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.148013 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e2b568d1-390e-4e9d-a077-90fd768200b6-registry-tls\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.148033 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e2b568d1-390e-4e9d-a077-90fd768200b6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.148095 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2b568d1-390e-4e9d-a077-90fd768200b6-trusted-ca\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.148124 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.148149 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psf75\" (UniqueName: \"kubernetes.io/projected/e2b568d1-390e-4e9d-a077-90fd768200b6-kube-api-access-psf75\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.168432 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.249993 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2b568d1-390e-4e9d-a077-90fd768200b6-trusted-ca\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.250054 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psf75\" (UniqueName: \"kubernetes.io/projected/e2b568d1-390e-4e9d-a077-90fd768200b6-kube-api-access-psf75\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.250108 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2b568d1-390e-4e9d-a077-90fd768200b6-bound-sa-token\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.250165 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e2b568d1-390e-4e9d-a077-90fd768200b6-registry-certificates\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc 
kubenswrapper[4728]: I0227 10:31:56.250197 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e2b568d1-390e-4e9d-a077-90fd768200b6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.250223 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e2b568d1-390e-4e9d-a077-90fd768200b6-registry-tls\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.250246 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e2b568d1-390e-4e9d-a077-90fd768200b6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.251337 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2b568d1-390e-4e9d-a077-90fd768200b6-trusted-ca\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.251344 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e2b568d1-390e-4e9d-a077-90fd768200b6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.251971 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e2b568d1-390e-4e9d-a077-90fd768200b6-registry-certificates\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.262919 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e2b568d1-390e-4e9d-a077-90fd768200b6-registry-tls\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.262997 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e2b568d1-390e-4e9d-a077-90fd768200b6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.264952 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2b568d1-390e-4e9d-a077-90fd768200b6-bound-sa-token\") pod \"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.268089 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psf75\" (UniqueName: \"kubernetes.io/projected/e2b568d1-390e-4e9d-a077-90fd768200b6-kube-api-access-psf75\") pod 
\"image-registry-66df7c8f76-g2w72\" (UID: \"e2b568d1-390e-4e9d-a077-90fd768200b6\") " pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.387590 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.629234 4728 generic.go:334] "Generic (PLEG): container finished" podID="57410d64-6726-4c64-b9f4-e1eaad0aa42e" containerID="b32ca1b4d615aafe881cd8f44e80a65caea0b120141194e8e3df6b47bbeaf098" exitCode=0 Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.629299 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-spm46" event={"ID":"57410d64-6726-4c64-b9f4-e1eaad0aa42e","Type":"ContainerDied","Data":"b32ca1b4d615aafe881cd8f44e80a65caea0b120141194e8e3df6b47bbeaf098"} Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.629326 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-spm46" event={"ID":"57410d64-6726-4c64-b9f4-e1eaad0aa42e","Type":"ContainerStarted","Data":"fff444b2a9eea7d510780b700c768f44bce8071c641caf45f904e79e8411d930"} Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.630485 4728 generic.go:334] "Generic (PLEG): container finished" podID="9ce7dd06-3478-4aa5-a8b5-d371a01feb41" containerID="d5e9e36435e91a5e1ad34be3be5c5cc40246316f297857f0b669b241a4bddd0b" exitCode=0 Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.630534 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llvng" event={"ID":"9ce7dd06-3478-4aa5-a8b5-d371a01feb41","Type":"ContainerDied","Data":"d5e9e36435e91a5e1ad34be3be5c5cc40246316f297857f0b669b241a4bddd0b"} Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.630599 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-llvng" event={"ID":"9ce7dd06-3478-4aa5-a8b5-d371a01feb41","Type":"ContainerStarted","Data":"dc4eec735f6f5c260762d1076fb10978a97cd9974e0e727cccefb996c02f0fd2"} Feb 27 10:31:56 crc kubenswrapper[4728]: I0227 10:31:56.778826 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g2w72"] Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.637538 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-spm46" event={"ID":"57410d64-6726-4c64-b9f4-e1eaad0aa42e","Type":"ContainerStarted","Data":"1317e781975b21b186df5dbfe0094b2fad674a6e8d198ca9385741c716a27134"} Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.639267 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llvng" event={"ID":"9ce7dd06-3478-4aa5-a8b5-d371a01feb41","Type":"ContainerStarted","Data":"90bb18ea36f0ce91ce74911bd9ce209c82857b0057360a073aaa79a4524e43d9"} Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.640649 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" event={"ID":"e2b568d1-390e-4e9d-a077-90fd768200b6","Type":"ContainerStarted","Data":"88cadc4d6d06551f37c20ec5d9a1bc66c733b64480bdeba85b953964580037f1"} Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.640697 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" event={"ID":"e2b568d1-390e-4e9d-a077-90fd768200b6","Type":"ContainerStarted","Data":"8b7c43c8fbdf7d942085e43c21b00702b6207878b665574a4132245243330648"} Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.640917 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.655402 4728 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-lcx6b"] Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.656546 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcx6b" Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.659301 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.682496 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcx6b"] Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.703254 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" podStartSLOduration=1.7032333309999999 podStartE2EDuration="1.703233331s" podCreationTimestamp="2026-02-27 10:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:31:57.702790699 +0000 UTC m=+337.665156825" watchObservedRunningTime="2026-02-27 10:31:57.703233331 +0000 UTC m=+337.665599427" Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.783873 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4523ffb8-c347-4cce-8db7-95b428446b0e-catalog-content\") pod \"redhat-marketplace-lcx6b\" (UID: \"4523ffb8-c347-4cce-8db7-95b428446b0e\") " pod="openshift-marketplace/redhat-marketplace-lcx6b" Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.784292 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4523ffb8-c347-4cce-8db7-95b428446b0e-utilities\") pod \"redhat-marketplace-lcx6b\" (UID: \"4523ffb8-c347-4cce-8db7-95b428446b0e\") " 
pod="openshift-marketplace/redhat-marketplace-lcx6b" Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.784372 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsxzd\" (UniqueName: \"kubernetes.io/projected/4523ffb8-c347-4cce-8db7-95b428446b0e-kube-api-access-dsxzd\") pod \"redhat-marketplace-lcx6b\" (UID: \"4523ffb8-c347-4cce-8db7-95b428446b0e\") " pod="openshift-marketplace/redhat-marketplace-lcx6b" Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.851110 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pkrw7"] Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.852147 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pkrw7" Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.854314 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.857621 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkrw7"] Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.885412 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsxzd\" (UniqueName: \"kubernetes.io/projected/4523ffb8-c347-4cce-8db7-95b428446b0e-kube-api-access-dsxzd\") pod \"redhat-marketplace-lcx6b\" (UID: \"4523ffb8-c347-4cce-8db7-95b428446b0e\") " pod="openshift-marketplace/redhat-marketplace-lcx6b" Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.885478 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4523ffb8-c347-4cce-8db7-95b428446b0e-catalog-content\") pod \"redhat-marketplace-lcx6b\" (UID: \"4523ffb8-c347-4cce-8db7-95b428446b0e\") " pod="openshift-marketplace/redhat-marketplace-lcx6b" 
Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.885532 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4523ffb8-c347-4cce-8db7-95b428446b0e-utilities\") pod \"redhat-marketplace-lcx6b\" (UID: \"4523ffb8-c347-4cce-8db7-95b428446b0e\") " pod="openshift-marketplace/redhat-marketplace-lcx6b" Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.885926 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4523ffb8-c347-4cce-8db7-95b428446b0e-utilities\") pod \"redhat-marketplace-lcx6b\" (UID: \"4523ffb8-c347-4cce-8db7-95b428446b0e\") " pod="openshift-marketplace/redhat-marketplace-lcx6b" Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.885952 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4523ffb8-c347-4cce-8db7-95b428446b0e-catalog-content\") pod \"redhat-marketplace-lcx6b\" (UID: \"4523ffb8-c347-4cce-8db7-95b428446b0e\") " pod="openshift-marketplace/redhat-marketplace-lcx6b" Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.909826 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsxzd\" (UniqueName: \"kubernetes.io/projected/4523ffb8-c347-4cce-8db7-95b428446b0e-kube-api-access-dsxzd\") pod \"redhat-marketplace-lcx6b\" (UID: \"4523ffb8-c347-4cce-8db7-95b428446b0e\") " pod="openshift-marketplace/redhat-marketplace-lcx6b" Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.971967 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lcx6b" Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.986984 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06035bc4-873a-492a-baa0-0760ddb0da68-utilities\") pod \"redhat-operators-pkrw7\" (UID: \"06035bc4-873a-492a-baa0-0760ddb0da68\") " pod="openshift-marketplace/redhat-operators-pkrw7" Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.987104 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06035bc4-873a-492a-baa0-0760ddb0da68-catalog-content\") pod \"redhat-operators-pkrw7\" (UID: \"06035bc4-873a-492a-baa0-0760ddb0da68\") " pod="openshift-marketplace/redhat-operators-pkrw7" Feb 27 10:31:57 crc kubenswrapper[4728]: I0227 10:31:57.987146 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9phlb\" (UniqueName: \"kubernetes.io/projected/06035bc4-873a-492a-baa0-0760ddb0da68-kube-api-access-9phlb\") pod \"redhat-operators-pkrw7\" (UID: \"06035bc4-873a-492a-baa0-0760ddb0da68\") " pod="openshift-marketplace/redhat-operators-pkrw7" Feb 27 10:31:58 crc kubenswrapper[4728]: I0227 10:31:58.088814 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06035bc4-873a-492a-baa0-0760ddb0da68-catalog-content\") pod \"redhat-operators-pkrw7\" (UID: \"06035bc4-873a-492a-baa0-0760ddb0da68\") " pod="openshift-marketplace/redhat-operators-pkrw7" Feb 27 10:31:58 crc kubenswrapper[4728]: I0227 10:31:58.088852 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9phlb\" (UniqueName: \"kubernetes.io/projected/06035bc4-873a-492a-baa0-0760ddb0da68-kube-api-access-9phlb\") pod 
\"redhat-operators-pkrw7\" (UID: \"06035bc4-873a-492a-baa0-0760ddb0da68\") " pod="openshift-marketplace/redhat-operators-pkrw7" Feb 27 10:31:58 crc kubenswrapper[4728]: I0227 10:31:58.088906 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06035bc4-873a-492a-baa0-0760ddb0da68-utilities\") pod \"redhat-operators-pkrw7\" (UID: \"06035bc4-873a-492a-baa0-0760ddb0da68\") " pod="openshift-marketplace/redhat-operators-pkrw7" Feb 27 10:31:58 crc kubenswrapper[4728]: I0227 10:31:58.089590 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06035bc4-873a-492a-baa0-0760ddb0da68-utilities\") pod \"redhat-operators-pkrw7\" (UID: \"06035bc4-873a-492a-baa0-0760ddb0da68\") " pod="openshift-marketplace/redhat-operators-pkrw7" Feb 27 10:31:58 crc kubenswrapper[4728]: I0227 10:31:58.089819 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06035bc4-873a-492a-baa0-0760ddb0da68-catalog-content\") pod \"redhat-operators-pkrw7\" (UID: \"06035bc4-873a-492a-baa0-0760ddb0da68\") " pod="openshift-marketplace/redhat-operators-pkrw7" Feb 27 10:31:58 crc kubenswrapper[4728]: I0227 10:31:58.111452 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9phlb\" (UniqueName: \"kubernetes.io/projected/06035bc4-873a-492a-baa0-0760ddb0da68-kube-api-access-9phlb\") pod \"redhat-operators-pkrw7\" (UID: \"06035bc4-873a-492a-baa0-0760ddb0da68\") " pod="openshift-marketplace/redhat-operators-pkrw7" Feb 27 10:31:58 crc kubenswrapper[4728]: I0227 10:31:58.170628 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkrw7" Feb 27 10:31:58 crc kubenswrapper[4728]: I0227 10:31:58.412934 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lcx6b"] Feb 27 10:31:58 crc kubenswrapper[4728]: W0227 10:31:58.418179 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4523ffb8_c347_4cce_8db7_95b428446b0e.slice/crio-b0f3921617933eb581c5d033610d3a13c453483b15242c0d6719e25f9f76700c WatchSource:0}: Error finding container b0f3921617933eb581c5d033610d3a13c453483b15242c0d6719e25f9f76700c: Status 404 returned error can't find the container with id b0f3921617933eb581c5d033610d3a13c453483b15242c0d6719e25f9f76700c Feb 27 10:31:58 crc kubenswrapper[4728]: I0227 10:31:58.647000 4728 generic.go:334] "Generic (PLEG): container finished" podID="4523ffb8-c347-4cce-8db7-95b428446b0e" containerID="fbc8c61fc8400acd2822b8730e9bfdb86caa16edc0970c7070fc4dd7e9cbf6b1" exitCode=0 Feb 27 10:31:58 crc kubenswrapper[4728]: I0227 10:31:58.647131 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcx6b" event={"ID":"4523ffb8-c347-4cce-8db7-95b428446b0e","Type":"ContainerDied","Data":"fbc8c61fc8400acd2822b8730e9bfdb86caa16edc0970c7070fc4dd7e9cbf6b1"} Feb 27 10:31:58 crc kubenswrapper[4728]: I0227 10:31:58.647163 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcx6b" event={"ID":"4523ffb8-c347-4cce-8db7-95b428446b0e","Type":"ContainerStarted","Data":"b0f3921617933eb581c5d033610d3a13c453483b15242c0d6719e25f9f76700c"} Feb 27 10:31:58 crc kubenswrapper[4728]: I0227 10:31:58.650669 4728 generic.go:334] "Generic (PLEG): container finished" podID="57410d64-6726-4c64-b9f4-e1eaad0aa42e" containerID="1317e781975b21b186df5dbfe0094b2fad674a6e8d198ca9385741c716a27134" exitCode=0 Feb 27 10:31:58 crc kubenswrapper[4728]: I0227 
10:31:58.650730 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-spm46" event={"ID":"57410d64-6726-4c64-b9f4-e1eaad0aa42e","Type":"ContainerDied","Data":"1317e781975b21b186df5dbfe0094b2fad674a6e8d198ca9385741c716a27134"} Feb 27 10:31:58 crc kubenswrapper[4728]: I0227 10:31:58.652236 4728 generic.go:334] "Generic (PLEG): container finished" podID="9ce7dd06-3478-4aa5-a8b5-d371a01feb41" containerID="90bb18ea36f0ce91ce74911bd9ce209c82857b0057360a073aaa79a4524e43d9" exitCode=0 Feb 27 10:31:58 crc kubenswrapper[4728]: I0227 10:31:58.653080 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llvng" event={"ID":"9ce7dd06-3478-4aa5-a8b5-d371a01feb41","Type":"ContainerDied","Data":"90bb18ea36f0ce91ce74911bd9ce209c82857b0057360a073aaa79a4524e43d9"} Feb 27 10:31:59 crc kubenswrapper[4728]: I0227 10:31:59.177077 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkrw7"] Feb 27 10:31:59 crc kubenswrapper[4728]: I0227 10:31:59.663195 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-spm46" event={"ID":"57410d64-6726-4c64-b9f4-e1eaad0aa42e","Type":"ContainerStarted","Data":"c92115c63a3d04786afb7d35601bf52b4816f3ef644c288fd306cd0bcb0d2784"} Feb 27 10:31:59 crc kubenswrapper[4728]: I0227 10:31:59.665867 4728 generic.go:334] "Generic (PLEG): container finished" podID="06035bc4-873a-492a-baa0-0760ddb0da68" containerID="91d77edd45abd47fb19b06b7d89dc2ee9aa1b0f668182dd6e142bc94324f6757" exitCode=0 Feb 27 10:31:59 crc kubenswrapper[4728]: I0227 10:31:59.665910 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkrw7" event={"ID":"06035bc4-873a-492a-baa0-0760ddb0da68","Type":"ContainerDied","Data":"91d77edd45abd47fb19b06b7d89dc2ee9aa1b0f668182dd6e142bc94324f6757"} Feb 27 10:31:59 crc kubenswrapper[4728]: I0227 10:31:59.665924 4728 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkrw7" event={"ID":"06035bc4-873a-492a-baa0-0760ddb0da68","Type":"ContainerStarted","Data":"cc82fbb63bb9fe0a7044ef5f0a092e070bcd97817c7b93760ee364efbe7ebc18"} Feb 27 10:31:59 crc kubenswrapper[4728]: I0227 10:31:59.669080 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llvng" event={"ID":"9ce7dd06-3478-4aa5-a8b5-d371a01feb41","Type":"ContainerStarted","Data":"51b75619f50f77268db3880a0aa6169dd78a1a1ee18064389c38e05e1bceed2a"} Feb 27 10:31:59 crc kubenswrapper[4728]: I0227 10:31:59.685385 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-spm46" podStartSLOduration=1.971583818 podStartE2EDuration="4.685363465s" podCreationTimestamp="2026-02-27 10:31:55 +0000 UTC" firstStartedPulling="2026-02-27 10:31:56.631243293 +0000 UTC m=+336.593609399" lastFinishedPulling="2026-02-27 10:31:59.34502294 +0000 UTC m=+339.307389046" observedRunningTime="2026-02-27 10:31:59.678626957 +0000 UTC m=+339.640993063" watchObservedRunningTime="2026-02-27 10:31:59.685363465 +0000 UTC m=+339.647729571" Feb 27 10:31:59 crc kubenswrapper[4728]: I0227 10:31:59.726495 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-llvng" podStartSLOduration=2.889999455 podStartE2EDuration="5.726481207s" podCreationTimestamp="2026-02-27 10:31:54 +0000 UTC" firstStartedPulling="2026-02-27 10:31:56.631584621 +0000 UTC m=+336.593950717" lastFinishedPulling="2026-02-27 10:31:59.468066363 +0000 UTC m=+339.430432469" observedRunningTime="2026-02-27 10:31:59.722298796 +0000 UTC m=+339.684664912" watchObservedRunningTime="2026-02-27 10:31:59.726481207 +0000 UTC m=+339.688847303" Feb 27 10:32:00 crc kubenswrapper[4728]: I0227 10:32:00.176746 4728 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29536472-w6zws"] Feb 27 10:32:00 crc kubenswrapper[4728]: I0227 10:32:00.177350 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536472-w6zws" Feb 27 10:32:00 crc kubenswrapper[4728]: I0227 10:32:00.179384 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 10:32:00 crc kubenswrapper[4728]: I0227 10:32:00.179408 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:32:00 crc kubenswrapper[4728]: I0227 10:32:00.179595 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:32:00 crc kubenswrapper[4728]: I0227 10:32:00.187709 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536472-w6zws"] Feb 27 10:32:00 crc kubenswrapper[4728]: I0227 10:32:00.224747 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w6xh\" (UniqueName: \"kubernetes.io/projected/a679f75d-9b65-494b-8520-5f79c1ce159f-kube-api-access-5w6xh\") pod \"auto-csr-approver-29536472-w6zws\" (UID: \"a679f75d-9b65-494b-8520-5f79c1ce159f\") " pod="openshift-infra/auto-csr-approver-29536472-w6zws" Feb 27 10:32:00 crc kubenswrapper[4728]: I0227 10:32:00.326410 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w6xh\" (UniqueName: \"kubernetes.io/projected/a679f75d-9b65-494b-8520-5f79c1ce159f-kube-api-access-5w6xh\") pod \"auto-csr-approver-29536472-w6zws\" (UID: \"a679f75d-9b65-494b-8520-5f79c1ce159f\") " pod="openshift-infra/auto-csr-approver-29536472-w6zws" Feb 27 10:32:00 crc kubenswrapper[4728]: I0227 10:32:00.351461 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w6xh\" (UniqueName: 
\"kubernetes.io/projected/a679f75d-9b65-494b-8520-5f79c1ce159f-kube-api-access-5w6xh\") pod \"auto-csr-approver-29536472-w6zws\" (UID: \"a679f75d-9b65-494b-8520-5f79c1ce159f\") " pod="openshift-infra/auto-csr-approver-29536472-w6zws" Feb 27 10:32:00 crc kubenswrapper[4728]: I0227 10:32:00.496815 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536472-w6zws" Feb 27 10:32:00 crc kubenswrapper[4728]: I0227 10:32:00.679646 4728 generic.go:334] "Generic (PLEG): container finished" podID="4523ffb8-c347-4cce-8db7-95b428446b0e" containerID="16068e85f5ef333f702c9cf6d4423c90cb0c9ac48bd42dda3526f4b132367d38" exitCode=0 Feb 27 10:32:00 crc kubenswrapper[4728]: I0227 10:32:00.679930 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcx6b" event={"ID":"4523ffb8-c347-4cce-8db7-95b428446b0e","Type":"ContainerDied","Data":"16068e85f5ef333f702c9cf6d4423c90cb0c9ac48bd42dda3526f4b132367d38"} Feb 27 10:32:00 crc kubenswrapper[4728]: I0227 10:32:00.682615 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkrw7" event={"ID":"06035bc4-873a-492a-baa0-0760ddb0da68","Type":"ContainerStarted","Data":"6faca71b662ee53d9b0fd7717684c0c8e54ba9dd1899a2f8b6ff57ae058de75f"} Feb 27 10:32:00 crc kubenswrapper[4728]: I0227 10:32:00.937304 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536472-w6zws"] Feb 27 10:32:00 crc kubenswrapper[4728]: W0227 10:32:00.942638 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda679f75d_9b65_494b_8520_5f79c1ce159f.slice/crio-56f3012fef7464581be00f96c4d40a7c51fbce4fe23e95988d7dba6438d4a4af WatchSource:0}: Error finding container 56f3012fef7464581be00f96c4d40a7c51fbce4fe23e95988d7dba6438d4a4af: Status 404 returned error can't find the container with id 
56f3012fef7464581be00f96c4d40a7c51fbce4fe23e95988d7dba6438d4a4af Feb 27 10:32:01 crc kubenswrapper[4728]: I0227 10:32:01.691271 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lcx6b" event={"ID":"4523ffb8-c347-4cce-8db7-95b428446b0e","Type":"ContainerStarted","Data":"6ca968cbc69e0095328648f0802711f58d734ab096a111acc49fe8981a9c4baf"} Feb 27 10:32:01 crc kubenswrapper[4728]: I0227 10:32:01.692794 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536472-w6zws" event={"ID":"a679f75d-9b65-494b-8520-5f79c1ce159f","Type":"ContainerStarted","Data":"56f3012fef7464581be00f96c4d40a7c51fbce4fe23e95988d7dba6438d4a4af"} Feb 27 10:32:01 crc kubenswrapper[4728]: I0227 10:32:01.694615 4728 generic.go:334] "Generic (PLEG): container finished" podID="06035bc4-873a-492a-baa0-0760ddb0da68" containerID="6faca71b662ee53d9b0fd7717684c0c8e54ba9dd1899a2f8b6ff57ae058de75f" exitCode=0 Feb 27 10:32:01 crc kubenswrapper[4728]: I0227 10:32:01.694644 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkrw7" event={"ID":"06035bc4-873a-492a-baa0-0760ddb0da68","Type":"ContainerDied","Data":"6faca71b662ee53d9b0fd7717684c0c8e54ba9dd1899a2f8b6ff57ae058de75f"} Feb 27 10:32:01 crc kubenswrapper[4728]: I0227 10:32:01.718459 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lcx6b" podStartSLOduration=2.236118601 podStartE2EDuration="4.718441171s" podCreationTimestamp="2026-02-27 10:31:57 +0000 UTC" firstStartedPulling="2026-02-27 10:31:58.648960301 +0000 UTC m=+338.611326397" lastFinishedPulling="2026-02-27 10:32:01.131282861 +0000 UTC m=+341.093648967" observedRunningTime="2026-02-27 10:32:01.714306082 +0000 UTC m=+341.676672228" watchObservedRunningTime="2026-02-27 10:32:01.718441171 +0000 UTC m=+341.680807277" Feb 27 10:32:02 crc kubenswrapper[4728]: I0227 10:32:02.700773 4728 
generic.go:334] "Generic (PLEG): container finished" podID="a679f75d-9b65-494b-8520-5f79c1ce159f" containerID="3102fb675e817da84f9726e038944f1e0ccf862fa0ca0f3822d16b585b47c058" exitCode=0 Feb 27 10:32:02 crc kubenswrapper[4728]: I0227 10:32:02.700852 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536472-w6zws" event={"ID":"a679f75d-9b65-494b-8520-5f79c1ce159f","Type":"ContainerDied","Data":"3102fb675e817da84f9726e038944f1e0ccf862fa0ca0f3822d16b585b47c058"} Feb 27 10:32:02 crc kubenswrapper[4728]: I0227 10:32:02.704624 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkrw7" event={"ID":"06035bc4-873a-492a-baa0-0760ddb0da68","Type":"ContainerStarted","Data":"8806a52c4744c6da249f65c48d13d066bed9e1ae4f3221e769fd7457844627d3"} Feb 27 10:32:02 crc kubenswrapper[4728]: I0227 10:32:02.732761 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pkrw7" podStartSLOduration=2.9970552120000002 podStartE2EDuration="5.73274281s" podCreationTimestamp="2026-02-27 10:31:57 +0000 UTC" firstStartedPulling="2026-02-27 10:31:59.667853562 +0000 UTC m=+339.630219668" lastFinishedPulling="2026-02-27 10:32:02.40354116 +0000 UTC m=+342.365907266" observedRunningTime="2026-02-27 10:32:02.732478304 +0000 UTC m=+342.694844420" watchObservedRunningTime="2026-02-27 10:32:02.73274281 +0000 UTC m=+342.695108906" Feb 27 10:32:03 crc kubenswrapper[4728]: I0227 10:32:03.997085 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536472-w6zws" Feb 27 10:32:04 crc kubenswrapper[4728]: I0227 10:32:04.074725 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w6xh\" (UniqueName: \"kubernetes.io/projected/a679f75d-9b65-494b-8520-5f79c1ce159f-kube-api-access-5w6xh\") pod \"a679f75d-9b65-494b-8520-5f79c1ce159f\" (UID: \"a679f75d-9b65-494b-8520-5f79c1ce159f\") " Feb 27 10:32:04 crc kubenswrapper[4728]: I0227 10:32:04.082627 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a679f75d-9b65-494b-8520-5f79c1ce159f-kube-api-access-5w6xh" (OuterVolumeSpecName: "kube-api-access-5w6xh") pod "a679f75d-9b65-494b-8520-5f79c1ce159f" (UID: "a679f75d-9b65-494b-8520-5f79c1ce159f"). InnerVolumeSpecName "kube-api-access-5w6xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:32:04 crc kubenswrapper[4728]: I0227 10:32:04.176280 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w6xh\" (UniqueName: \"kubernetes.io/projected/a679f75d-9b65-494b-8520-5f79c1ce159f-kube-api-access-5w6xh\") on node \"crc\" DevicePath \"\"" Feb 27 10:32:04 crc kubenswrapper[4728]: I0227 10:32:04.725724 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536472-w6zws" Feb 27 10:32:04 crc kubenswrapper[4728]: I0227 10:32:04.735472 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536472-w6zws" event={"ID":"a679f75d-9b65-494b-8520-5f79c1ce159f","Type":"ContainerDied","Data":"56f3012fef7464581be00f96c4d40a7c51fbce4fe23e95988d7dba6438d4a4af"} Feb 27 10:32:04 crc kubenswrapper[4728]: I0227 10:32:04.735539 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56f3012fef7464581be00f96c4d40a7c51fbce4fe23e95988d7dba6438d4a4af" Feb 27 10:32:05 crc kubenswrapper[4728]: I0227 10:32:05.198428 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-llvng" Feb 27 10:32:05 crc kubenswrapper[4728]: I0227 10:32:05.198788 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-llvng" Feb 27 10:32:05 crc kubenswrapper[4728]: I0227 10:32:05.264646 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-llvng" Feb 27 10:32:05 crc kubenswrapper[4728]: I0227 10:32:05.379831 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-spm46" Feb 27 10:32:05 crc kubenswrapper[4728]: I0227 10:32:05.379885 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-spm46" Feb 27 10:32:05 crc kubenswrapper[4728]: I0227 10:32:05.430950 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-spm46" Feb 27 10:32:05 crc kubenswrapper[4728]: I0227 10:32:05.770134 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-spm46" Feb 27 10:32:05 crc kubenswrapper[4728]: I0227 
10:32:05.781353 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-llvng" Feb 27 10:32:07 crc kubenswrapper[4728]: I0227 10:32:07.972332 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lcx6b" Feb 27 10:32:07 crc kubenswrapper[4728]: I0227 10:32:07.972712 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lcx6b" Feb 27 10:32:08 crc kubenswrapper[4728]: I0227 10:32:08.025720 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lcx6b" Feb 27 10:32:08 crc kubenswrapper[4728]: I0227 10:32:08.171707 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pkrw7" Feb 27 10:32:08 crc kubenswrapper[4728]: I0227 10:32:08.171756 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pkrw7" Feb 27 10:32:08 crc kubenswrapper[4728]: I0227 10:32:08.786146 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lcx6b" Feb 27 10:32:09 crc kubenswrapper[4728]: I0227 10:32:09.211275 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pkrw7" podUID="06035bc4-873a-492a-baa0-0760ddb0da68" containerName="registry-server" probeResult="failure" output=< Feb 27 10:32:09 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 10:32:09 crc kubenswrapper[4728]: > Feb 27 10:32:16 crc kubenswrapper[4728]: I0227 10:32:16.395580 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-g2w72" Feb 27 10:32:16 crc kubenswrapper[4728]: I0227 10:32:16.463657 4728 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w4vnn"] Feb 27 10:32:18 crc kubenswrapper[4728]: I0227 10:32:18.216044 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pkrw7" Feb 27 10:32:18 crc kubenswrapper[4728]: I0227 10:32:18.275030 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pkrw7" Feb 27 10:32:41 crc kubenswrapper[4728]: I0227 10:32:41.524985 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" podUID="9f01d342-6bde-4063-b99d-b0efda456aef" containerName="registry" containerID="cri-o://4251ecde32a2c009d59cc49c987d3a45a0917a6d4e04ba12ccc59a2159ee5c46" gracePeriod=30 Feb 27 10:32:41 crc kubenswrapper[4728]: I0227 10:32:41.909804 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:32:41 crc kubenswrapper[4728]: I0227 10:32:41.947668 4728 generic.go:334] "Generic (PLEG): container finished" podID="9f01d342-6bde-4063-b99d-b0efda456aef" containerID="4251ecde32a2c009d59cc49c987d3a45a0917a6d4e04ba12ccc59a2159ee5c46" exitCode=0 Feb 27 10:32:41 crc kubenswrapper[4728]: I0227 10:32:41.947709 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" event={"ID":"9f01d342-6bde-4063-b99d-b0efda456aef","Type":"ContainerDied","Data":"4251ecde32a2c009d59cc49c987d3a45a0917a6d4e04ba12ccc59a2159ee5c46"} Feb 27 10:32:41 crc kubenswrapper[4728]: I0227 10:32:41.947734 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" event={"ID":"9f01d342-6bde-4063-b99d-b0efda456aef","Type":"ContainerDied","Data":"806d53efbb4f84dd8a088d8c0237b502cc6124d832b40e3687bbc823a2d1d7c7"} Feb 27 10:32:41 crc kubenswrapper[4728]: I0227 
10:32:41.947750 4728 scope.go:117] "RemoveContainer" containerID="4251ecde32a2c009d59cc49c987d3a45a0917a6d4e04ba12ccc59a2159ee5c46" Feb 27 10:32:41 crc kubenswrapper[4728]: I0227 10:32:41.947849 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w4vnn" Feb 27 10:32:41 crc kubenswrapper[4728]: I0227 10:32:41.973223 4728 scope.go:117] "RemoveContainer" containerID="4251ecde32a2c009d59cc49c987d3a45a0917a6d4e04ba12ccc59a2159ee5c46" Feb 27 10:32:41 crc kubenswrapper[4728]: E0227 10:32:41.973783 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4251ecde32a2c009d59cc49c987d3a45a0917a6d4e04ba12ccc59a2159ee5c46\": container with ID starting with 4251ecde32a2c009d59cc49c987d3a45a0917a6d4e04ba12ccc59a2159ee5c46 not found: ID does not exist" containerID="4251ecde32a2c009d59cc49c987d3a45a0917a6d4e04ba12ccc59a2159ee5c46" Feb 27 10:32:41 crc kubenswrapper[4728]: I0227 10:32:41.973826 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4251ecde32a2c009d59cc49c987d3a45a0917a6d4e04ba12ccc59a2159ee5c46"} err="failed to get container status \"4251ecde32a2c009d59cc49c987d3a45a0917a6d4e04ba12ccc59a2159ee5c46\": rpc error: code = NotFound desc = could not find container \"4251ecde32a2c009d59cc49c987d3a45a0917a6d4e04ba12ccc59a2159ee5c46\": container with ID starting with 4251ecde32a2c009d59cc49c987d3a45a0917a6d4e04ba12ccc59a2159ee5c46 not found: ID does not exist" Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.096093 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f01d342-6bde-4063-b99d-b0efda456aef-bound-sa-token\") pod \"9f01d342-6bde-4063-b99d-b0efda456aef\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.096360 4728 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4kgv\" (UniqueName: \"kubernetes.io/projected/9f01d342-6bde-4063-b99d-b0efda456aef-kube-api-access-c4kgv\") pod \"9f01d342-6bde-4063-b99d-b0efda456aef\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.096465 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f01d342-6bde-4063-b99d-b0efda456aef-trusted-ca\") pod \"9f01d342-6bde-4063-b99d-b0efda456aef\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.096597 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f01d342-6bde-4063-b99d-b0efda456aef-ca-trust-extracted\") pod \"9f01d342-6bde-4063-b99d-b0efda456aef\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.096712 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f01d342-6bde-4063-b99d-b0efda456aef-registry-tls\") pod \"9f01d342-6bde-4063-b99d-b0efda456aef\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.096866 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f01d342-6bde-4063-b99d-b0efda456aef-installation-pull-secrets\") pod \"9f01d342-6bde-4063-b99d-b0efda456aef\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.097092 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"9f01d342-6bde-4063-b99d-b0efda456aef\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.097221 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9f01d342-6bde-4063-b99d-b0efda456aef-registry-certificates\") pod \"9f01d342-6bde-4063-b99d-b0efda456aef\" (UID: \"9f01d342-6bde-4063-b99d-b0efda456aef\") " Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.098107 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f01d342-6bde-4063-b99d-b0efda456aef-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f01d342-6bde-4063-b99d-b0efda456aef" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.099274 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f01d342-6bde-4063-b99d-b0efda456aef-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9f01d342-6bde-4063-b99d-b0efda456aef" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.103349 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f01d342-6bde-4063-b99d-b0efda456aef-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9f01d342-6bde-4063-b99d-b0efda456aef" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.103455 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f01d342-6bde-4063-b99d-b0efda456aef-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9f01d342-6bde-4063-b99d-b0efda456aef" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.104197 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f01d342-6bde-4063-b99d-b0efda456aef-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f01d342-6bde-4063-b99d-b0efda456aef" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.112151 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "9f01d342-6bde-4063-b99d-b0efda456aef" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.119249 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f01d342-6bde-4063-b99d-b0efda456aef-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9f01d342-6bde-4063-b99d-b0efda456aef" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.120313 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f01d342-6bde-4063-b99d-b0efda456aef-kube-api-access-c4kgv" (OuterVolumeSpecName: "kube-api-access-c4kgv") pod "9f01d342-6bde-4063-b99d-b0efda456aef" (UID: "9f01d342-6bde-4063-b99d-b0efda456aef"). InnerVolumeSpecName "kube-api-access-c4kgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.199872 4728 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f01d342-6bde-4063-b99d-b0efda456aef-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.199973 4728 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f01d342-6bde-4063-b99d-b0efda456aef-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.199994 4728 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f01d342-6bde-4063-b99d-b0efda456aef-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.200016 4728 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9f01d342-6bde-4063-b99d-b0efda456aef-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.200034 4728 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f01d342-6bde-4063-b99d-b0efda456aef-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.200050 4728 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-c4kgv\" (UniqueName: \"kubernetes.io/projected/9f01d342-6bde-4063-b99d-b0efda456aef-kube-api-access-c4kgv\") on node \"crc\" DevicePath \"\"" Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.200066 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f01d342-6bde-4063-b99d-b0efda456aef-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.287722 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w4vnn"] Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.293704 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w4vnn"] Feb 27 10:32:42 crc kubenswrapper[4728]: I0227 10:32:42.738975 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f01d342-6bde-4063-b99d-b0efda456aef" path="/var/lib/kubelet/pods/9f01d342-6bde-4063-b99d-b0efda456aef/volumes" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.819451 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-rnq2w"] Feb 27 10:32:46 crc kubenswrapper[4728]: E0227 10:32:46.820015 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f01d342-6bde-4063-b99d-b0efda456aef" containerName="registry" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.820031 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f01d342-6bde-4063-b99d-b0efda456aef" containerName="registry" Feb 27 10:32:46 crc kubenswrapper[4728]: E0227 10:32:46.820056 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a679f75d-9b65-494b-8520-5f79c1ce159f" containerName="oc" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.820066 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a679f75d-9b65-494b-8520-5f79c1ce159f" containerName="oc" Feb 
27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.820176 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f01d342-6bde-4063-b99d-b0efda456aef" containerName="registry" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.820195 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a679f75d-9b65-494b-8520-5f79c1ce159f" containerName="oc" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.820640 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rnq2w" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.826895 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.826991 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.827553 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.828401 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.828444 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.836913 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-rnq2w"] Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.864784 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/6e1f88d1-87e8-4bd6-a1a8-da319307f2bf-telemetry-config\") pod 
\"cluster-monitoring-operator-6d5b84845-rnq2w\" (UID: \"6e1f88d1-87e8-4bd6-a1a8-da319307f2bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rnq2w" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.864962 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e1f88d1-87e8-4bd6-a1a8-da319307f2bf-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-rnq2w\" (UID: \"6e1f88d1-87e8-4bd6-a1a8-da319307f2bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rnq2w" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.865062 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h74q\" (UniqueName: \"kubernetes.io/projected/6e1f88d1-87e8-4bd6-a1a8-da319307f2bf-kube-api-access-2h74q\") pod \"cluster-monitoring-operator-6d5b84845-rnq2w\" (UID: \"6e1f88d1-87e8-4bd6-a1a8-da319307f2bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rnq2w" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.966229 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/6e1f88d1-87e8-4bd6-a1a8-da319307f2bf-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-rnq2w\" (UID: \"6e1f88d1-87e8-4bd6-a1a8-da319307f2bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rnq2w" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.966332 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e1f88d1-87e8-4bd6-a1a8-da319307f2bf-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-rnq2w\" (UID: \"6e1f88d1-87e8-4bd6-a1a8-da319307f2bf\") " 
pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rnq2w" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.966375 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h74q\" (UniqueName: \"kubernetes.io/projected/6e1f88d1-87e8-4bd6-a1a8-da319307f2bf-kube-api-access-2h74q\") pod \"cluster-monitoring-operator-6d5b84845-rnq2w\" (UID: \"6e1f88d1-87e8-4bd6-a1a8-da319307f2bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rnq2w" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.968570 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/6e1f88d1-87e8-4bd6-a1a8-da319307f2bf-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-rnq2w\" (UID: \"6e1f88d1-87e8-4bd6-a1a8-da319307f2bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rnq2w" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.976886 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e1f88d1-87e8-4bd6-a1a8-da319307f2bf-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-rnq2w\" (UID: \"6e1f88d1-87e8-4bd6-a1a8-da319307f2bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rnq2w" Feb 27 10:32:46 crc kubenswrapper[4728]: I0227 10:32:46.989427 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h74q\" (UniqueName: \"kubernetes.io/projected/6e1f88d1-87e8-4bd6-a1a8-da319307f2bf-kube-api-access-2h74q\") pod \"cluster-monitoring-operator-6d5b84845-rnq2w\" (UID: \"6e1f88d1-87e8-4bd6-a1a8-da319307f2bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rnq2w" Feb 27 10:32:47 crc kubenswrapper[4728]: I0227 10:32:47.162485 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rnq2w" Feb 27 10:32:47 crc kubenswrapper[4728]: I0227 10:32:47.450353 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-rnq2w"] Feb 27 10:32:48 crc kubenswrapper[4728]: I0227 10:32:48.004877 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rnq2w" event={"ID":"6e1f88d1-87e8-4bd6-a1a8-da319307f2bf","Type":"ContainerStarted","Data":"1d2355ef4647ff52000169c4a75e86ce8205a919826dec5809590632f391ba56"} Feb 27 10:32:50 crc kubenswrapper[4728]: I0227 10:32:50.016145 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rnq2w" event={"ID":"6e1f88d1-87e8-4bd6-a1a8-da319307f2bf","Type":"ContainerStarted","Data":"d15a9e75e2c47e279049051aac9aa52ccba582a8854da8c1f7ab1366cf3dcde7"} Feb 27 10:32:50 crc kubenswrapper[4728]: I0227 10:32:50.031660 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rnq2w" podStartSLOduration=2.047545455 podStartE2EDuration="4.031641502s" podCreationTimestamp="2026-02-27 10:32:46 +0000 UTC" firstStartedPulling="2026-02-27 10:32:47.455479474 +0000 UTC m=+387.417845580" lastFinishedPulling="2026-02-27 10:32:49.439575521 +0000 UTC m=+389.401941627" observedRunningTime="2026-02-27 10:32:50.031394376 +0000 UTC m=+389.993760482" watchObservedRunningTime="2026-02-27 10:32:50.031641502 +0000 UTC m=+389.994007608" Feb 27 10:32:50 crc kubenswrapper[4728]: I0227 10:32:50.074753 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rwzqq"] Feb 27 10:32:50 crc kubenswrapper[4728]: I0227 10:32:50.075406 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rwzqq" Feb 27 10:32:50 crc kubenswrapper[4728]: I0227 10:32:50.076846 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-b8z2n" Feb 27 10:32:50 crc kubenswrapper[4728]: I0227 10:32:50.083248 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 27 10:32:50 crc kubenswrapper[4728]: I0227 10:32:50.092062 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rwzqq"] Feb 27 10:32:50 crc kubenswrapper[4728]: I0227 10:32:50.226100 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/02ca5e1b-6d56-45a9-b669-3868ec2489e9-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-rwzqq\" (UID: \"02ca5e1b-6d56-45a9-b669-3868ec2489e9\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rwzqq" Feb 27 10:32:50 crc kubenswrapper[4728]: I0227 10:32:50.328035 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/02ca5e1b-6d56-45a9-b669-3868ec2489e9-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-rwzqq\" (UID: \"02ca5e1b-6d56-45a9-b669-3868ec2489e9\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rwzqq" Feb 27 10:32:50 crc kubenswrapper[4728]: I0227 10:32:50.333424 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/02ca5e1b-6d56-45a9-b669-3868ec2489e9-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-rwzqq\" (UID: \"02ca5e1b-6d56-45a9-b669-3868ec2489e9\") " 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rwzqq" Feb 27 10:32:50 crc kubenswrapper[4728]: I0227 10:32:50.388016 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rwzqq" Feb 27 10:32:50 crc kubenswrapper[4728]: I0227 10:32:50.603936 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rwzqq"] Feb 27 10:32:50 crc kubenswrapper[4728]: W0227 10:32:50.609696 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02ca5e1b_6d56_45a9_b669_3868ec2489e9.slice/crio-5fdbc497d73481e33147bda78af11fdb2cb72f5c4a950c8df9334e29064c53d2 WatchSource:0}: Error finding container 5fdbc497d73481e33147bda78af11fdb2cb72f5c4a950c8df9334e29064c53d2: Status 404 returned error can't find the container with id 5fdbc497d73481e33147bda78af11fdb2cb72f5c4a950c8df9334e29064c53d2 Feb 27 10:32:51 crc kubenswrapper[4728]: I0227 10:32:51.025958 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rwzqq" event={"ID":"02ca5e1b-6d56-45a9-b669-3868ec2489e9","Type":"ContainerStarted","Data":"5fdbc497d73481e33147bda78af11fdb2cb72f5c4a950c8df9334e29064c53d2"} Feb 27 10:32:53 crc kubenswrapper[4728]: I0227 10:32:53.039737 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rwzqq" event={"ID":"02ca5e1b-6d56-45a9-b669-3868ec2489e9","Type":"ContainerStarted","Data":"eeae1461275af18d441f70049d16b0fbaf86f228c1e6050bf7973e4e9a7dd828"} Feb 27 10:32:53 crc kubenswrapper[4728]: I0227 10:32:53.040160 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rwzqq" Feb 27 10:32:53 crc kubenswrapper[4728]: 
I0227 10:32:53.048357 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rwzqq" Feb 27 10:32:53 crc kubenswrapper[4728]: I0227 10:32:53.079941 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rwzqq" podStartSLOduration=1.523870735 podStartE2EDuration="3.07992174s" podCreationTimestamp="2026-02-27 10:32:50 +0000 UTC" firstStartedPulling="2026-02-27 10:32:50.612488986 +0000 UTC m=+390.574855102" lastFinishedPulling="2026-02-27 10:32:52.168540001 +0000 UTC m=+392.130906107" observedRunningTime="2026-02-27 10:32:53.057334011 +0000 UTC m=+393.019700127" watchObservedRunningTime="2026-02-27 10:32:53.07992174 +0000 UTC m=+393.042287846" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.160031 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-24mkc"] Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.161414 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-24mkc" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.165957 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.167023 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-ntx6p" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.167125 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.168105 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.178540 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-24mkc"] Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.283976 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2ceb469-9bd9-4b41-b491-f9c49a047893-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-24mkc\" (UID: \"f2ceb469-9bd9-4b41-b491-f9c49a047893\") " pod="openshift-monitoring/prometheus-operator-db54df47d-24mkc" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.284027 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bm87\" (UniqueName: \"kubernetes.io/projected/f2ceb469-9bd9-4b41-b491-f9c49a047893-kube-api-access-7bm87\") pod \"prometheus-operator-db54df47d-24mkc\" (UID: \"f2ceb469-9bd9-4b41-b491-f9c49a047893\") " pod="openshift-monitoring/prometheus-operator-db54df47d-24mkc" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.284058 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f2ceb469-9bd9-4b41-b491-f9c49a047893-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-24mkc\" (UID: \"f2ceb469-9bd9-4b41-b491-f9c49a047893\") " pod="openshift-monitoring/prometheus-operator-db54df47d-24mkc" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.284107 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f2ceb469-9bd9-4b41-b491-f9c49a047893-metrics-client-ca\") pod \"prometheus-operator-db54df47d-24mkc\" (UID: \"f2ceb469-9bd9-4b41-b491-f9c49a047893\") " pod="openshift-monitoring/prometheus-operator-db54df47d-24mkc" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.385388 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f2ceb469-9bd9-4b41-b491-f9c49a047893-metrics-client-ca\") pod \"prometheus-operator-db54df47d-24mkc\" (UID: \"f2ceb469-9bd9-4b41-b491-f9c49a047893\") " pod="openshift-monitoring/prometheus-operator-db54df47d-24mkc" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.385473 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2ceb469-9bd9-4b41-b491-f9c49a047893-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-24mkc\" (UID: \"f2ceb469-9bd9-4b41-b491-f9c49a047893\") " pod="openshift-monitoring/prometheus-operator-db54df47d-24mkc" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.385524 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bm87\" (UniqueName: \"kubernetes.io/projected/f2ceb469-9bd9-4b41-b491-f9c49a047893-kube-api-access-7bm87\") pod 
\"prometheus-operator-db54df47d-24mkc\" (UID: \"f2ceb469-9bd9-4b41-b491-f9c49a047893\") " pod="openshift-monitoring/prometheus-operator-db54df47d-24mkc" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.385551 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f2ceb469-9bd9-4b41-b491-f9c49a047893-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-24mkc\" (UID: \"f2ceb469-9bd9-4b41-b491-f9c49a047893\") " pod="openshift-monitoring/prometheus-operator-db54df47d-24mkc" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.389517 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f2ceb469-9bd9-4b41-b491-f9c49a047893-metrics-client-ca\") pod \"prometheus-operator-db54df47d-24mkc\" (UID: \"f2ceb469-9bd9-4b41-b491-f9c49a047893\") " pod="openshift-monitoring/prometheus-operator-db54df47d-24mkc" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.392420 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2ceb469-9bd9-4b41-b491-f9c49a047893-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-24mkc\" (UID: \"f2ceb469-9bd9-4b41-b491-f9c49a047893\") " pod="openshift-monitoring/prometheus-operator-db54df47d-24mkc" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.395190 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f2ceb469-9bd9-4b41-b491-f9c49a047893-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-24mkc\" (UID: \"f2ceb469-9bd9-4b41-b491-f9c49a047893\") " pod="openshift-monitoring/prometheus-operator-db54df47d-24mkc" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.402238 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bm87\" (UniqueName: \"kubernetes.io/projected/f2ceb469-9bd9-4b41-b491-f9c49a047893-kube-api-access-7bm87\") pod \"prometheus-operator-db54df47d-24mkc\" (UID: \"f2ceb469-9bd9-4b41-b491-f9c49a047893\") " pod="openshift-monitoring/prometheus-operator-db54df47d-24mkc" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.478591 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-24mkc" Feb 27 10:32:54 crc kubenswrapper[4728]: I0227 10:32:54.921237 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-24mkc"] Feb 27 10:32:54 crc kubenswrapper[4728]: W0227 10:32:54.929924 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2ceb469_9bd9_4b41_b491_f9c49a047893.slice/crio-41987f8ec235141cd8e30f0fa20b9c5876ed2340cd77e3448a4332a6e1a0089a WatchSource:0}: Error finding container 41987f8ec235141cd8e30f0fa20b9c5876ed2340cd77e3448a4332a6e1a0089a: Status 404 returned error can't find the container with id 41987f8ec235141cd8e30f0fa20b9c5876ed2340cd77e3448a4332a6e1a0089a Feb 27 10:32:55 crc kubenswrapper[4728]: I0227 10:32:55.056711 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-24mkc" event={"ID":"f2ceb469-9bd9-4b41-b491-f9c49a047893","Type":"ContainerStarted","Data":"41987f8ec235141cd8e30f0fa20b9c5876ed2340cd77e3448a4332a6e1a0089a"} Feb 27 10:32:57 crc kubenswrapper[4728]: I0227 10:32:57.071296 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-24mkc" event={"ID":"f2ceb469-9bd9-4b41-b491-f9c49a047893","Type":"ContainerStarted","Data":"029e89b11485b060bce8c5cbb5f9b40ff76712365c99caaa59a3b2b866263e9f"} Feb 27 10:32:57 crc kubenswrapper[4728]: I0227 
10:32:57.071906 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-24mkc" event={"ID":"f2ceb469-9bd9-4b41-b491-f9c49a047893","Type":"ContainerStarted","Data":"42d0407555616a5b0da413b67c1f67632626f65ede13ff07778ee40571cb3487"} Feb 27 10:32:57 crc kubenswrapper[4728]: I0227 10:32:57.097995 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-24mkc" podStartSLOduration=1.49589287 podStartE2EDuration="3.097967746s" podCreationTimestamp="2026-02-27 10:32:54 +0000 UTC" firstStartedPulling="2026-02-27 10:32:54.933833185 +0000 UTC m=+394.896199331" lastFinishedPulling="2026-02-27 10:32:56.535908071 +0000 UTC m=+396.498274207" observedRunningTime="2026-02-27 10:32:57.093914768 +0000 UTC m=+397.056280914" watchObservedRunningTime="2026-02-27 10:32:57.097967746 +0000 UTC m=+397.060333892" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.545775 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7"] Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.547127 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.549183 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7"] Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.549948 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.550188 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-96jhk" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.550344 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.551120 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmq6n\" (UniqueName: \"kubernetes.io/projected/96ede355-adc7-4cb3-b1fa-29270249ad62-kube-api-access-pmq6n\") pod \"openshift-state-metrics-566fddb674-rg2q7\" (UID: \"96ede355-adc7-4cb3-b1fa-29270249ad62\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.551174 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/96ede355-adc7-4cb3-b1fa-29270249ad62-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-rg2q7\" (UID: \"96ede355-adc7-4cb3-b1fa-29270249ad62\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.551261 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96ede355-adc7-4cb3-b1fa-29270249ad62-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-rg2q7\" (UID: \"96ede355-adc7-4cb3-b1fa-29270249ad62\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.551313 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/96ede355-adc7-4cb3-b1fa-29270249ad62-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-rg2q7\" (UID: \"96ede355-adc7-4cb3-b1fa-29270249ad62\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.564521 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn"] Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.565452 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.572298 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-lpgnf"] Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.573250 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.581579 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.582358 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-89npq" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.582496 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.582839 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-lsg5b" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.582943 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.583092 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.586838 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.602550 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn"] Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.654644 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmq6n\" (UniqueName: \"kubernetes.io/projected/96ede355-adc7-4cb3-b1fa-29270249ad62-kube-api-access-pmq6n\") pod \"openshift-state-metrics-566fddb674-rg2q7\" (UID: \"96ede355-adc7-4cb3-b1fa-29270249ad62\") " 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.654743 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/96ede355-adc7-4cb3-b1fa-29270249ad62-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-rg2q7\" (UID: \"96ede355-adc7-4cb3-b1fa-29270249ad62\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.654942 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/96ede355-adc7-4cb3-b1fa-29270249ad62-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-rg2q7\" (UID: \"96ede355-adc7-4cb3-b1fa-29270249ad62\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.655004 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/96ede355-adc7-4cb3-b1fa-29270249ad62-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-rg2q7\" (UID: \"96ede355-adc7-4cb3-b1fa-29270249ad62\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.655935 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/96ede355-adc7-4cb3-b1fa-29270249ad62-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-rg2q7\" (UID: \"96ede355-adc7-4cb3-b1fa-29270249ad62\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" Feb 27 10:32:58 crc kubenswrapper[4728]: E0227 10:32:58.656010 4728 secret.go:188] Couldn't get secret 
openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Feb 27 10:32:58 crc kubenswrapper[4728]: E0227 10:32:58.656056 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96ede355-adc7-4cb3-b1fa-29270249ad62-openshift-state-metrics-tls podName:96ede355-adc7-4cb3-b1fa-29270249ad62 nodeName:}" failed. No retries permitted until 2026-02-27 10:32:59.156043225 +0000 UTC m=+399.118409331 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/96ede355-adc7-4cb3-b1fa-29270249ad62-openshift-state-metrics-tls") pod "openshift-state-metrics-566fddb674-rg2q7" (UID: "96ede355-adc7-4cb3-b1fa-29270249ad62") : secret "openshift-state-metrics-tls" not found Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.681320 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/96ede355-adc7-4cb3-b1fa-29270249ad62-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-rg2q7\" (UID: \"96ede355-adc7-4cb3-b1fa-29270249ad62\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.688109 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmq6n\" (UniqueName: \"kubernetes.io/projected/96ede355-adc7-4cb3-b1fa-29270249ad62-kube-api-access-pmq6n\") pod \"openshift-state-metrics-566fddb674-rg2q7\" (UID: \"96ede355-adc7-4cb3-b1fa-29270249ad62\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.758535 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a50e4af5-0849-433f-94d4-e8855b63bc42-metrics-client-ca\") pod 
\"kube-state-metrics-777cb5bd5d-ftftn\" (UID: \"a50e4af5-0849-433f-94d4-e8855b63bc42\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.758589 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/a50e4af5-0849-433f-94d4-e8855b63bc42-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-ftftn\" (UID: \"a50e4af5-0849-433f-94d4-e8855b63bc42\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.758609 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-sys\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.758638 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a50e4af5-0849-433f-94d4-e8855b63bc42-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-ftftn\" (UID: \"a50e4af5-0849-433f-94d4-e8855b63bc42\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.758657 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-root\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.758818 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-node-exporter-textfile\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.758899 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-node-exporter-tls\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.758938 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-node-exporter-wtmp\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.759031 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-metrics-client-ca\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.759113 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a50e4af5-0849-433f-94d4-e8855b63bc42-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-ftftn\" (UID: \"a50e4af5-0849-433f-94d4-e8855b63bc42\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.759314 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/a50e4af5-0849-433f-94d4-e8855b63bc42-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-ftftn\" (UID: \"a50e4af5-0849-433f-94d4-e8855b63bc42\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.759369 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qfc7\" (UniqueName: \"kubernetes.io/projected/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-kube-api-access-2qfc7\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.759417 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.759546 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv2n8\" (UniqueName: \"kubernetes.io/projected/a50e4af5-0849-433f-94d4-e8855b63bc42-kube-api-access-kv2n8\") pod \"kube-state-metrics-777cb5bd5d-ftftn\" (UID: \"a50e4af5-0849-433f-94d4-e8855b63bc42\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.860491 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a50e4af5-0849-433f-94d4-e8855b63bc42-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-ftftn\" (UID: \"a50e4af5-0849-433f-94d4-e8855b63bc42\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.860550 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/a50e4af5-0849-433f-94d4-e8855b63bc42-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-ftftn\" (UID: \"a50e4af5-0849-433f-94d4-e8855b63bc42\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.860583 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qfc7\" (UniqueName: \"kubernetes.io/projected/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-kube-api-access-2qfc7\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.860608 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.860647 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv2n8\" (UniqueName: \"kubernetes.io/projected/a50e4af5-0849-433f-94d4-e8855b63bc42-kube-api-access-kv2n8\") pod \"kube-state-metrics-777cb5bd5d-ftftn\" (UID: \"a50e4af5-0849-433f-94d4-e8855b63bc42\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc 
kubenswrapper[4728]: I0227 10:32:58.860670 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a50e4af5-0849-433f-94d4-e8855b63bc42-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-ftftn\" (UID: \"a50e4af5-0849-433f-94d4-e8855b63bc42\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.860690 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/a50e4af5-0849-433f-94d4-e8855b63bc42-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-ftftn\" (UID: \"a50e4af5-0849-433f-94d4-e8855b63bc42\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.860706 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-sys\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.860733 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a50e4af5-0849-433f-94d4-e8855b63bc42-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-ftftn\" (UID: \"a50e4af5-0849-433f-94d4-e8855b63bc42\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.860749 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-root\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " 
pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.860779 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-node-exporter-textfile\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.860800 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-node-exporter-tls\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.860817 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-node-exporter-wtmp\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.860841 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-metrics-client-ca\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.861402 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/a50e4af5-0849-433f-94d4-e8855b63bc42-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-ftftn\" (UID: \"a50e4af5-0849-433f-94d4-e8855b63bc42\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.861541 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/a50e4af5-0849-433f-94d4-e8855b63bc42-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-ftftn\" (UID: \"a50e4af5-0849-433f-94d4-e8855b63bc42\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.861588 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-metrics-client-ca\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.861700 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-root\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.861806 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-node-exporter-textfile\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.861901 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-sys\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " 
pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.861994 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-node-exporter-wtmp\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.862041 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a50e4af5-0849-433f-94d4-e8855b63bc42-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-ftftn\" (UID: \"a50e4af5-0849-433f-94d4-e8855b63bc42\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.864897 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-node-exporter-tls\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.864984 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.865075 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a50e4af5-0849-433f-94d4-e8855b63bc42-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-ftftn\" (UID: \"a50e4af5-0849-433f-94d4-e8855b63bc42\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.865147 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a50e4af5-0849-433f-94d4-e8855b63bc42-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-ftftn\" (UID: \"a50e4af5-0849-433f-94d4-e8855b63bc42\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.877660 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qfc7\" (UniqueName: \"kubernetes.io/projected/8e8bbbdd-c45e-41cb-a6d9-259fde32539b-kube-api-access-2qfc7\") pod \"node-exporter-lpgnf\" (UID: \"8e8bbbdd-c45e-41cb-a6d9-259fde32539b\") " pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.882489 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv2n8\" (UniqueName: \"kubernetes.io/projected/a50e4af5-0849-433f-94d4-e8855b63bc42-kube-api-access-kv2n8\") pod \"kube-state-metrics-777cb5bd5d-ftftn\" (UID: \"a50e4af5-0849-433f-94d4-e8855b63bc42\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:58 crc kubenswrapper[4728]: I0227 10:32:58.892791 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-lpgnf" Feb 27 10:32:58 crc kubenswrapper[4728]: W0227 10:32:58.910476 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e8bbbdd_c45e_41cb_a6d9_259fde32539b.slice/crio-770c635ed281fb4e4a5d5567fe9fc61d7a91ebd98218a562aba2e316a860e6f9 WatchSource:0}: Error finding container 770c635ed281fb4e4a5d5567fe9fc61d7a91ebd98218a562aba2e316a860e6f9: Status 404 returned error can't find the container with id 770c635ed281fb4e4a5d5567fe9fc61d7a91ebd98218a562aba2e316a860e6f9 Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.084874 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-lpgnf" event={"ID":"8e8bbbdd-c45e-41cb-a6d9-259fde32539b","Type":"ContainerStarted","Data":"770c635ed281fb4e4a5d5567fe9fc61d7a91ebd98218a562aba2e316a860e6f9"} Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.163785 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/96ede355-adc7-4cb3-b1fa-29270249ad62-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-rg2q7\" (UID: \"96ede355-adc7-4cb3-b1fa-29270249ad62\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.168931 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/96ede355-adc7-4cb3-b1fa-29270249ad62-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-rg2q7\" (UID: \"96ede355-adc7-4cb3-b1fa-29270249ad62\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.178299 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.461173 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.537108 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn"] Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.622882 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.625001 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.631454 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.634277 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.634543 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.634696 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.634787 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.634824 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.634945 4728 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-47m9q" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.635029 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.652576 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.657375 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.668854 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.668905 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8j7x\" (UniqueName: \"kubernetes.io/projected/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-kube-api-access-h8j7x\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.668933 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.668959 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.668982 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.669007 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.669027 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.669077 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-web-config\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.669102 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-config-volume\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.669133 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.669165 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-config-out\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.669189 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.738270 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7"] Feb 27 10:32:59 crc kubenswrapper[4728]: W0227 10:32:59.750872 4728 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96ede355_adc7_4cb3_b1fa_29270249ad62.slice/crio-38351bbf4ce9cf5baffc3000a09c364fe056651da4d7c19207b7837df1dcde20 WatchSource:0}: Error finding container 38351bbf4ce9cf5baffc3000a09c364fe056651da4d7c19207b7837df1dcde20: Status 404 returned error can't find the container with id 38351bbf4ce9cf5baffc3000a09c364fe056651da4d7c19207b7837df1dcde20 Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.770077 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-config-out\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.770130 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.770162 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.770196 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8j7x\" (UniqueName: \"kubernetes.io/projected/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-kube-api-access-h8j7x\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.770219 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.770245 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.770266 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.770291 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.770312 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: 
\"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.770362 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-web-config\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.770411 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-config-volume\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.770444 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.771087 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.772417 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.772717 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.776114 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.776206 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.776155 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-config-volume\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.778154 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 
27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.778770 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-web-config\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.778979 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-config-out\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.779191 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.786185 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8j7x\" (UniqueName: \"kubernetes.io/projected/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-kube-api-access-h8j7x\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 10:32:59.790352 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f\") " pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:32:59 crc kubenswrapper[4728]: I0227 
10:32:59.996277 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.095361 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" event={"ID":"a50e4af5-0849-433f-94d4-e8855b63bc42","Type":"ContainerStarted","Data":"8dad2a608ece924f864d630b83f822b40051b925519edeca99eb0ceb9558373c"} Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.097331 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" event={"ID":"96ede355-adc7-4cb3-b1fa-29270249ad62","Type":"ContainerStarted","Data":"d09643729ac9fdc9a73bba80866af4df1d0d1223248a95ebc4aef7ecca2aa5e3"} Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.097359 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" event={"ID":"96ede355-adc7-4cb3-b1fa-29270249ad62","Type":"ContainerStarted","Data":"38351bbf4ce9cf5baffc3000a09c364fe056651da4d7c19207b7837df1dcde20"} Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.351160 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 27 10:33:00 crc kubenswrapper[4728]: W0227 10:33:00.355531 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb3b4b4e_7bdc_4c84_8292_b7ccb2fc3a6f.slice/crio-6a81990c00ba54997ca861b751cca09a12329bb63f4d0205d17035aa14c1f58f WatchSource:0}: Error finding container 6a81990c00ba54997ca861b751cca09a12329bb63f4d0205d17035aa14c1f58f: Status 404 returned error can't find the container with id 6a81990c00ba54997ca861b751cca09a12329bb63f4d0205d17035aa14c1f58f Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.601135 4728 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/thanos-querier-6c5dbd664-trjtq"] Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.603414 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.605128 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.605146 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.605128 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.605459 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-9pfcx" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.605623 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.606078 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.611047 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-c7k714qgeagd9" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.611491 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6c5dbd664-trjtq"] Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.681209 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: 
\"kubernetes.io/secret/d623145b-4ef4-4101-a703-ff4ef1392e82-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.681275 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d623145b-4ef4-4101-a703-ff4ef1392e82-secret-grpc-tls\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.681301 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnmj2\" (UniqueName: \"kubernetes.io/projected/d623145b-4ef4-4101-a703-ff4ef1392e82-kube-api-access-hnmj2\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.681368 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/d623145b-4ef4-4101-a703-ff4ef1392e82-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.681469 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d623145b-4ef4-4101-a703-ff4ef1392e82-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: 
\"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.681535 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d623145b-4ef4-4101-a703-ff4ef1392e82-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.681576 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/d623145b-4ef4-4101-a703-ff4ef1392e82-secret-thanos-querier-tls\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.681607 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d623145b-4ef4-4101-a703-ff4ef1392e82-metrics-client-ca\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.807031 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/d623145b-4ef4-4101-a703-ff4ef1392e82-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.807110 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d623145b-4ef4-4101-a703-ff4ef1392e82-secret-grpc-tls\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.807139 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnmj2\" (UniqueName: \"kubernetes.io/projected/d623145b-4ef4-4101-a703-ff4ef1392e82-kube-api-access-hnmj2\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.807177 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/d623145b-4ef4-4101-a703-ff4ef1392e82-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.807212 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d623145b-4ef4-4101-a703-ff4ef1392e82-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.807234 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d623145b-4ef4-4101-a703-ff4ef1392e82-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: 
\"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.807261 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/d623145b-4ef4-4101-a703-ff4ef1392e82-secret-thanos-querier-tls\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.807284 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d623145b-4ef4-4101-a703-ff4ef1392e82-metrics-client-ca\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.808393 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d623145b-4ef4-4101-a703-ff4ef1392e82-metrics-client-ca\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.814838 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/d623145b-4ef4-4101-a703-ff4ef1392e82-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.815990 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: 
\"kubernetes.io/secret/d623145b-4ef4-4101-a703-ff4ef1392e82-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.820430 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d623145b-4ef4-4101-a703-ff4ef1392e82-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.822722 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d623145b-4ef4-4101-a703-ff4ef1392e82-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.831961 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnmj2\" (UniqueName: \"kubernetes.io/projected/d623145b-4ef4-4101-a703-ff4ef1392e82-kube-api-access-hnmj2\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.832405 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/d623145b-4ef4-4101-a703-ff4ef1392e82-secret-thanos-querier-tls\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc 
kubenswrapper[4728]: I0227 10:33:00.839575 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d623145b-4ef4-4101-a703-ff4ef1392e82-secret-grpc-tls\") pod \"thanos-querier-6c5dbd664-trjtq\" (UID: \"d623145b-4ef4-4101-a703-ff4ef1392e82\") " pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:00 crc kubenswrapper[4728]: I0227 10:33:00.879858 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:01 crc kubenswrapper[4728]: I0227 10:33:01.073376 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6c5dbd664-trjtq"] Feb 27 10:33:01 crc kubenswrapper[4728]: W0227 10:33:01.080240 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd623145b_4ef4_4101_a703_ff4ef1392e82.slice/crio-180836c1f6030a106939417d1f0bfdc6a3c1819d436b255280303dc40dbc3e92 WatchSource:0}: Error finding container 180836c1f6030a106939417d1f0bfdc6a3c1819d436b255280303dc40dbc3e92: Status 404 returned error can't find the container with id 180836c1f6030a106939417d1f0bfdc6a3c1819d436b255280303dc40dbc3e92 Feb 27 10:33:01 crc kubenswrapper[4728]: I0227 10:33:01.105535 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f","Type":"ContainerStarted","Data":"6a81990c00ba54997ca861b751cca09a12329bb63f4d0205d17035aa14c1f58f"} Feb 27 10:33:01 crc kubenswrapper[4728]: I0227 10:33:01.111747 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" event={"ID":"96ede355-adc7-4cb3-b1fa-29270249ad62","Type":"ContainerStarted","Data":"4607b22142d4bdbf17ec2813c4a3000fb82a18ac1487ce59e5e469b9d4321e42"} Feb 27 10:33:01 crc kubenswrapper[4728]: I0227 
10:33:01.114994 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-lpgnf" event={"ID":"8e8bbbdd-c45e-41cb-a6d9-259fde32539b","Type":"ContainerDied","Data":"59f5c1e2e122e7643d86edc324f7466698712c76cf9b1ee7c60d2b6e704dc738"} Feb 27 10:33:01 crc kubenswrapper[4728]: I0227 10:33:01.114999 4728 generic.go:334] "Generic (PLEG): container finished" podID="8e8bbbdd-c45e-41cb-a6d9-259fde32539b" containerID="59f5c1e2e122e7643d86edc324f7466698712c76cf9b1ee7c60d2b6e704dc738" exitCode=0 Feb 27 10:33:01 crc kubenswrapper[4728]: I0227 10:33:01.116744 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" event={"ID":"d623145b-4ef4-4101-a703-ff4ef1392e82","Type":"ContainerStarted","Data":"180836c1f6030a106939417d1f0bfdc6a3c1819d436b255280303dc40dbc3e92"} Feb 27 10:33:02 crc kubenswrapper[4728]: I0227 10:33:02.124993 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-lpgnf" event={"ID":"8e8bbbdd-c45e-41cb-a6d9-259fde32539b","Type":"ContainerStarted","Data":"5b31560374eb55424a4aa6181dffbe5bc71f61ae79f2809f1d784705b8e469ca"} Feb 27 10:33:02 crc kubenswrapper[4728]: I0227 10:33:02.125421 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-lpgnf" event={"ID":"8e8bbbdd-c45e-41cb-a6d9-259fde32539b","Type":"ContainerStarted","Data":"7cf9468a937814407fc4c476a5e58ce1a679a17429fcd9ece5cae99276cdaf18"} Feb 27 10:33:02 crc kubenswrapper[4728]: I0227 10:33:02.149528 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-lpgnf" podStartSLOduration=2.934362465 podStartE2EDuration="4.149473949s" podCreationTimestamp="2026-02-27 10:32:58 +0000 UTC" firstStartedPulling="2026-02-27 10:32:58.912606619 +0000 UTC m=+398.874972725" lastFinishedPulling="2026-02-27 10:33:00.127718083 +0000 UTC m=+400.090084209" observedRunningTime="2026-02-27 10:33:02.146744376 
+0000 UTC m=+402.109110502" watchObservedRunningTime="2026-02-27 10:33:02.149473949 +0000 UTC m=+402.111840075" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.132837 4728 generic.go:334] "Generic (PLEG): container finished" podID="db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f" containerID="7e5d754eda89ff693aae58e17f1734eeebd55c82e0bb4e9f4ff1fad0de255d5a" exitCode=0 Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.132940 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f","Type":"ContainerDied","Data":"7e5d754eda89ff693aae58e17f1734eeebd55c82e0bb4e9f4ff1fad0de255d5a"} Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.134943 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" event={"ID":"a50e4af5-0849-433f-94d4-e8855b63bc42","Type":"ContainerStarted","Data":"65fac31afb3fda05db668e5e9928a379eeaa4598a417181c81d156eb9db1cad1"} Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.138776 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" event={"ID":"96ede355-adc7-4cb3-b1fa-29270249ad62","Type":"ContainerStarted","Data":"65979481acb6badbd149a24a1122057a5cead0b1e8eac24c90fad5825d24cdd0"} Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.186246 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rg2q7" podStartSLOduration=2.566133069 podStartE2EDuration="5.186228132s" podCreationTimestamp="2026-02-27 10:32:58 +0000 UTC" firstStartedPulling="2026-02-27 10:33:00.246720269 +0000 UTC m=+400.209086415" lastFinishedPulling="2026-02-27 10:33:02.866815372 +0000 UTC m=+402.829181478" observedRunningTime="2026-02-27 10:33:03.183264924 +0000 UTC m=+403.145631040" watchObservedRunningTime="2026-02-27 10:33:03.186228132 +0000 UTC m=+403.148594238" 
Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.339034 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5b986f7559-nxhwt"] Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.340422 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.348546 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/651e26df-d8dd-423b-9df7-ce05f925582c-console-oauth-config\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.348608 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-console-config\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.348657 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-service-ca\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.348724 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/651e26df-d8dd-423b-9df7-ce05f925582c-console-serving-cert\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 
27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.348773 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj6pd\" (UniqueName: \"kubernetes.io/projected/651e26df-d8dd-423b-9df7-ce05f925582c-kube-api-access-rj6pd\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.348799 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-trusted-ca-bundle\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.348820 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-oauth-serving-cert\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.350954 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b986f7559-nxhwt"] Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.449685 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj6pd\" (UniqueName: \"kubernetes.io/projected/651e26df-d8dd-423b-9df7-ce05f925582c-kube-api-access-rj6pd\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.449725 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-trusted-ca-bundle\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.449740 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-oauth-serving-cert\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.449782 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/651e26df-d8dd-423b-9df7-ce05f925582c-console-oauth-config\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.449804 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-console-config\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.449840 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-service-ca\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.449881 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/651e26df-d8dd-423b-9df7-ce05f925582c-console-serving-cert\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.450834 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-oauth-serving-cert\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.451984 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-console-config\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.452173 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-trusted-ca-bundle\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.453328 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-service-ca\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.455557 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/651e26df-d8dd-423b-9df7-ce05f925582c-console-oauth-config\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.456741 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/651e26df-d8dd-423b-9df7-ce05f925582c-console-serving-cert\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.473657 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj6pd\" (UniqueName: \"kubernetes.io/projected/651e26df-d8dd-423b-9df7-ce05f925582c-kube-api-access-rj6pd\") pod \"console-5b986f7559-nxhwt\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.664107 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.856975 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-865ccf4b8c-kfx52"] Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.858164 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.861699 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-6qofe6lfs0l0t" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.861755 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.861899 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.861981 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.861902 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-zwq7c" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.862079 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 27 10:33:03 crc kubenswrapper[4728]: I0227 10:33:03.867336 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-865ccf4b8c-kfx52"] Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.056791 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c62fc2-8eff-4513-a465-696999f20a5b-client-ca-bundle\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.057013 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/67c62fc2-8eff-4513-a465-696999f20a5b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.057079 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/67c62fc2-8eff-4513-a465-696999f20a5b-audit-log\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.057155 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk2sn\" (UniqueName: \"kubernetes.io/projected/67c62fc2-8eff-4513-a465-696999f20a5b-kube-api-access-zk2sn\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.057210 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/67c62fc2-8eff-4513-a465-696999f20a5b-secret-metrics-client-certs\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.057372 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/67c62fc2-8eff-4513-a465-696999f20a5b-metrics-server-audit-profiles\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " 
pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.057496 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/67c62fc2-8eff-4513-a465-696999f20a5b-secret-metrics-server-tls\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.140078 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b986f7559-nxhwt"] Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.151224 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" event={"ID":"a50e4af5-0849-433f-94d4-e8855b63bc42","Type":"ContainerStarted","Data":"e0f98a9c68ffc20be9234252841a6ab44c74918f56ead9fbdf74e28c5a9e3493"} Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.151269 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" event={"ID":"a50e4af5-0849-433f-94d4-e8855b63bc42","Type":"ContainerStarted","Data":"96e8078449cc934597cbb32202eaddf2050288eb7310adcbb598ecdf18790c5d"} Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.159071 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c62fc2-8eff-4513-a465-696999f20a5b-client-ca-bundle\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.159190 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/67c62fc2-8eff-4513-a465-696999f20a5b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.159271 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/67c62fc2-8eff-4513-a465-696999f20a5b-audit-log\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.159320 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk2sn\" (UniqueName: \"kubernetes.io/projected/67c62fc2-8eff-4513-a465-696999f20a5b-kube-api-access-zk2sn\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.159388 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/67c62fc2-8eff-4513-a465-696999f20a5b-secret-metrics-client-certs\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.159433 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/67c62fc2-8eff-4513-a465-696999f20a5b-metrics-server-audit-profiles\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.159460 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/67c62fc2-8eff-4513-a465-696999f20a5b-secret-metrics-server-tls\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.162921 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/67c62fc2-8eff-4513-a465-696999f20a5b-audit-log\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.164783 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c62fc2-8eff-4513-a465-696999f20a5b-client-ca-bundle\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.164800 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67c62fc2-8eff-4513-a465-696999f20a5b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.165015 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/67c62fc2-8eff-4513-a465-696999f20a5b-metrics-server-audit-profiles\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " 
pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.168139 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/67c62fc2-8eff-4513-a465-696999f20a5b-secret-metrics-server-tls\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.168483 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/67c62fc2-8eff-4513-a465-696999f20a5b-secret-metrics-client-certs\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.180299 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-ftftn" podStartSLOduration=2.888715574 podStartE2EDuration="6.180282274s" podCreationTimestamp="2026-02-27 10:32:58 +0000 UTC" firstStartedPulling="2026-02-27 10:32:59.567350743 +0000 UTC m=+399.529716849" lastFinishedPulling="2026-02-27 10:33:02.858917443 +0000 UTC m=+402.821283549" observedRunningTime="2026-02-27 10:33:04.173423442 +0000 UTC m=+404.135789548" watchObservedRunningTime="2026-02-27 10:33:04.180282274 +0000 UTC m=+404.142648380" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.184468 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk2sn\" (UniqueName: \"kubernetes.io/projected/67c62fc2-8eff-4513-a465-696999f20a5b-kube-api-access-zk2sn\") pod \"metrics-server-865ccf4b8c-kfx52\" (UID: \"67c62fc2-8eff-4513-a465-696999f20a5b\") " pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 
10:33:04.211282 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.324595 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-6d4444d757-pgkmh"] Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.325985 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6d4444d757-pgkmh" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.328908 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.329753 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.336706 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6d4444d757-pgkmh"] Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.361578 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/d07fcbb8-785f-4a06-a5a7-9353ee537cf3-monitoring-plugin-cert\") pod \"monitoring-plugin-6d4444d757-pgkmh\" (UID: \"d07fcbb8-785f-4a06-a5a7-9353ee537cf3\") " pod="openshift-monitoring/monitoring-plugin-6d4444d757-pgkmh" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.462564 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/d07fcbb8-785f-4a06-a5a7-9353ee537cf3-monitoring-plugin-cert\") pod \"monitoring-plugin-6d4444d757-pgkmh\" (UID: \"d07fcbb8-785f-4a06-a5a7-9353ee537cf3\") " pod="openshift-monitoring/monitoring-plugin-6d4444d757-pgkmh" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.467290 
4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/d07fcbb8-785f-4a06-a5a7-9353ee537cf3-monitoring-plugin-cert\") pod \"monitoring-plugin-6d4444d757-pgkmh\" (UID: \"d07fcbb8-785f-4a06-a5a7-9353ee537cf3\") " pod="openshift-monitoring/monitoring-plugin-6d4444d757-pgkmh" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.642320 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6d4444d757-pgkmh" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.864209 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.866577 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.868398 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.868626 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.868786 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.868904 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.869032 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.869112 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.869140 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.869177 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-web-config\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.869197 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.869267 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.869319 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-config-out\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.869357 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 
10:33:04.869378 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.869399 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.869415 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.869570 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-config\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.869617 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nj2l\" (UniqueName: \"kubernetes.io/projected/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-kube-api-access-7nj2l\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.869653 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.871752 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.871988 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.871992 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.872137 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.872525 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-5wz4k" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.872651 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.872713 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.872770 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 
10:33:04.872812 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.872868 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-d7kt9rgdlud3j" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.872944 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.875882 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.885107 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.887476 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.970645 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-config-out\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.970705 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.970729 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.970757 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.970778 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.970825 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-config\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.970847 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nj2l\" (UniqueName: \"kubernetes.io/projected/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-kube-api-access-7nj2l\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.970872 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.970904 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.970929 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.970963 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.970989 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.971016 4728 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.971044 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.971064 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.971089 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-web-config\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.971110 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.971146 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" 
(UniqueName: \"kubernetes.io/empty-dir/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.971626 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.971775 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.972362 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.972941 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.973194 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.977460 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.978265 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.978309 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.978523 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-config-out\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.981963 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: 
\"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.982178 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.982796 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-config\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.982801 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.982802 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.983048 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" 
(UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.983977 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-web-config\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.984808 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:04 crc kubenswrapper[4728]: I0227 10:33:04.987456 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nj2l\" (UniqueName: \"kubernetes.io/projected/613d3f3d-0258-48b0-ab33-59cb3a3d36a1-kube-api-access-7nj2l\") pod \"prometheus-k8s-0\" (UID: \"613d3f3d-0258-48b0-ab33-59cb3a3d36a1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:05 crc kubenswrapper[4728]: I0227 10:33:05.184487 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:06 crc kubenswrapper[4728]: I0227 10:33:06.188564 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f","Type":"ContainerStarted","Data":"361304a78f2579ce91186898918b1e83ca51e32c91f4382653036cc807951907"} Feb 27 10:33:06 crc kubenswrapper[4728]: I0227 10:33:06.195296 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" event={"ID":"d623145b-4ef4-4101-a703-ff4ef1392e82","Type":"ContainerStarted","Data":"72506058a5fafc46d53049a99065d306922d30927b2908a39e478010e52f5326"} Feb 27 10:33:06 crc kubenswrapper[4728]: I0227 10:33:06.199938 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b986f7559-nxhwt" event={"ID":"651e26df-d8dd-423b-9df7-ce05f925582c","Type":"ContainerStarted","Data":"1ff21c620cff1431cb08994f5b13b73df6532d6fbab8a0cd88b9cfcc26a5db58"} Feb 27 10:33:06 crc kubenswrapper[4728]: I0227 10:33:06.199961 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b986f7559-nxhwt" event={"ID":"651e26df-d8dd-423b-9df7-ce05f925582c","Type":"ContainerStarted","Data":"b9864df126d88724016843cb4985ab4b50c5c409620bb47a8f80220297d334f2"} Feb 27 10:33:06 crc kubenswrapper[4728]: I0227 10:33:06.219033 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5b986f7559-nxhwt" podStartSLOduration=3.21901873 podStartE2EDuration="3.21901873s" podCreationTimestamp="2026-02-27 10:33:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:33:06.218054295 +0000 UTC m=+406.180420401" watchObservedRunningTime="2026-02-27 10:33:06.21901873 +0000 UTC m=+406.181384826" Feb 27 10:33:06 crc kubenswrapper[4728]: I0227 10:33:06.379654 4728 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6d4444d757-pgkmh"] Feb 27 10:33:06 crc kubenswrapper[4728]: W0227 10:33:06.395448 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd07fcbb8_785f_4a06_a5a7_9353ee537cf3.slice/crio-c6e2c96be767bbdf028be026a2c276b4f251c0ebab8ac9dabcbf454817c75c80 WatchSource:0}: Error finding container c6e2c96be767bbdf028be026a2c276b4f251c0ebab8ac9dabcbf454817c75c80: Status 404 returned error can't find the container with id c6e2c96be767bbdf028be026a2c276b4f251c0ebab8ac9dabcbf454817c75c80 Feb 27 10:33:06 crc kubenswrapper[4728]: I0227 10:33:06.453304 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-865ccf4b8c-kfx52"] Feb 27 10:33:06 crc kubenswrapper[4728]: W0227 10:33:06.463301 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67c62fc2_8eff_4513_a465_696999f20a5b.slice/crio-ffa74ed19e41c8bd677881bb2b5f8cfca1e6cb0f7868176be9838b6fb5ae17ca WatchSource:0}: Error finding container ffa74ed19e41c8bd677881bb2b5f8cfca1e6cb0f7868176be9838b6fb5ae17ca: Status 404 returned error can't find the container with id ffa74ed19e41c8bd677881bb2b5f8cfca1e6cb0f7868176be9838b6fb5ae17ca Feb 27 10:33:06 crc kubenswrapper[4728]: I0227 10:33:06.470336 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 27 10:33:06 crc kubenswrapper[4728]: W0227 10:33:06.491305 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod613d3f3d_0258_48b0_ab33_59cb3a3d36a1.slice/crio-2b8792f28dbb06ae146184ff534af8a7b3ccb1f30f97926fc031eb561440687f WatchSource:0}: Error finding container 2b8792f28dbb06ae146184ff534af8a7b3ccb1f30f97926fc031eb561440687f: Status 404 returned error can't find the container 
with id 2b8792f28dbb06ae146184ff534af8a7b3ccb1f30f97926fc031eb561440687f Feb 27 10:33:07 crc kubenswrapper[4728]: I0227 10:33:07.212061 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f","Type":"ContainerStarted","Data":"3c27b075d3f933ef3f3e5e00e8c3e0c0afc790140f5da45764a0c2b49cf7e197"} Feb 27 10:33:07 crc kubenswrapper[4728]: I0227 10:33:07.212463 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f","Type":"ContainerStarted","Data":"e11825858c81cf2af0b4b3ebbf958b625431bbfdc1aa97a265fa0ccfb3f1a09c"} Feb 27 10:33:07 crc kubenswrapper[4728]: I0227 10:33:07.212488 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f","Type":"ContainerStarted","Data":"cb888dcce9f87d802eea89d26c839f7fa44a06487992c054943da238b9a88ec9"} Feb 27 10:33:07 crc kubenswrapper[4728]: I0227 10:33:07.212543 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f","Type":"ContainerStarted","Data":"01d5aaa42516016c738792e4b73ee3f6ea72484ae69a3152151c00f4d4363fde"} Feb 27 10:33:07 crc kubenswrapper[4728]: I0227 10:33:07.213112 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6d4444d757-pgkmh" event={"ID":"d07fcbb8-785f-4a06-a5a7-9353ee537cf3","Type":"ContainerStarted","Data":"c6e2c96be767bbdf028be026a2c276b4f251c0ebab8ac9dabcbf454817c75c80"} Feb 27 10:33:07 crc kubenswrapper[4728]: I0227 10:33:07.214014 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" 
event={"ID":"67c62fc2-8eff-4513-a465-696999f20a5b","Type":"ContainerStarted","Data":"ffa74ed19e41c8bd677881bb2b5f8cfca1e6cb0f7868176be9838b6fb5ae17ca"} Feb 27 10:33:07 crc kubenswrapper[4728]: I0227 10:33:07.215484 4728 generic.go:334] "Generic (PLEG): container finished" podID="613d3f3d-0258-48b0-ab33-59cb3a3d36a1" containerID="d3461fef2558d15d853b5b7b371820ba2b7444324ce0a732aa7c56ebcaeb0563" exitCode=0 Feb 27 10:33:07 crc kubenswrapper[4728]: I0227 10:33:07.215628 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"613d3f3d-0258-48b0-ab33-59cb3a3d36a1","Type":"ContainerDied","Data":"d3461fef2558d15d853b5b7b371820ba2b7444324ce0a732aa7c56ebcaeb0563"} Feb 27 10:33:07 crc kubenswrapper[4728]: I0227 10:33:07.215672 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"613d3f3d-0258-48b0-ab33-59cb3a3d36a1","Type":"ContainerStarted","Data":"2b8792f28dbb06ae146184ff534af8a7b3ccb1f30f97926fc031eb561440687f"} Feb 27 10:33:07 crc kubenswrapper[4728]: I0227 10:33:07.223250 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" event={"ID":"d623145b-4ef4-4101-a703-ff4ef1392e82","Type":"ContainerStarted","Data":"4448f9422b2e633dfd3725b0050bd0a8fb8eaa63ae0849c837e5d9aed586df0b"} Feb 27 10:33:07 crc kubenswrapper[4728]: I0227 10:33:07.223291 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" event={"ID":"d623145b-4ef4-4101-a703-ff4ef1392e82","Type":"ContainerStarted","Data":"b8bea100947aacf829a5ca0ba672ef70f6af57f8b76816daae12b326452a2ec4"} Feb 27 10:33:08 crc kubenswrapper[4728]: I0227 10:33:08.234210 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" 
event={"ID":"d623145b-4ef4-4101-a703-ff4ef1392e82","Type":"ContainerStarted","Data":"c92ef4923c47bf4425f173e399923f8fcb3f9aa452dc3affd0297ab25f4a59f9"} Feb 27 10:33:08 crc kubenswrapper[4728]: I0227 10:33:08.240814 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"db3b4b4e-7bdc-4c84-8292-b7ccb2fc3a6f","Type":"ContainerStarted","Data":"c592edd0b3c48a5c9b3914a8383d4d3006e8946cbfc119c30a87810ca593e863"} Feb 27 10:33:09 crc kubenswrapper[4728]: I0227 10:33:09.251198 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" event={"ID":"d623145b-4ef4-4101-a703-ff4ef1392e82","Type":"ContainerStarted","Data":"e5e07f6acb9d8609b12ccdcc4d8fb205778cc5d6062eccd0ed17efaa44982100"} Feb 27 10:33:09 crc kubenswrapper[4728]: I0227 10:33:09.251829 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:09 crc kubenswrapper[4728]: I0227 10:33:09.251846 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" event={"ID":"d623145b-4ef4-4101-a703-ff4ef1392e82","Type":"ContainerStarted","Data":"478bcb9b8ab601db1cbd6779ec00d83d5904207a74a9da39193fadcf31a43d7d"} Feb 27 10:33:09 crc kubenswrapper[4728]: I0227 10:33:09.253691 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6d4444d757-pgkmh" event={"ID":"d07fcbb8-785f-4a06-a5a7-9353ee537cf3","Type":"ContainerStarted","Data":"b925646059ec5f632152207b9a639cf49ad3bf03634963612e9c831831d7d965"} Feb 27 10:33:09 crc kubenswrapper[4728]: I0227 10:33:09.253916 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6d4444d757-pgkmh" Feb 27 10:33:09 crc kubenswrapper[4728]: I0227 10:33:09.256354 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" event={"ID":"67c62fc2-8eff-4513-a465-696999f20a5b","Type":"ContainerStarted","Data":"377463b296e7255b0550dacca8f58d39b38be825b66ce0afebef933a224b6bc9"} Feb 27 10:33:09 crc kubenswrapper[4728]: I0227 10:33:09.261553 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6d4444d757-pgkmh" Feb 27 10:33:09 crc kubenswrapper[4728]: I0227 10:33:09.279300 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" podStartSLOduration=2.853736615 podStartE2EDuration="9.279274646s" podCreationTimestamp="2026-02-27 10:33:00 +0000 UTC" firstStartedPulling="2026-02-27 10:33:01.082582455 +0000 UTC m=+401.044948561" lastFinishedPulling="2026-02-27 10:33:07.508120486 +0000 UTC m=+407.470486592" observedRunningTime="2026-02-27 10:33:09.275733173 +0000 UTC m=+409.238099319" watchObservedRunningTime="2026-02-27 10:33:09.279274646 +0000 UTC m=+409.241640782" Feb 27 10:33:09 crc kubenswrapper[4728]: I0227 10:33:09.284448 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.105824161 podStartE2EDuration="10.284431853s" podCreationTimestamp="2026-02-27 10:32:59 +0000 UTC" firstStartedPulling="2026-02-27 10:33:00.358052391 +0000 UTC m=+400.320418487" lastFinishedPulling="2026-02-27 10:33:07.536660073 +0000 UTC m=+407.499026179" observedRunningTime="2026-02-27 10:33:08.270643908 +0000 UTC m=+408.233010034" watchObservedRunningTime="2026-02-27 10:33:09.284431853 +0000 UTC m=+409.246797959" Feb 27 10:33:09 crc kubenswrapper[4728]: I0227 10:33:09.302002 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-6d4444d757-pgkmh" podStartSLOduration=3.122833187 podStartE2EDuration="5.301947137s" podCreationTimestamp="2026-02-27 10:33:04 +0000 UTC" 
firstStartedPulling="2026-02-27 10:33:06.39815211 +0000 UTC m=+406.360518216" lastFinishedPulling="2026-02-27 10:33:08.57726606 +0000 UTC m=+408.539632166" observedRunningTime="2026-02-27 10:33:09.294572752 +0000 UTC m=+409.256938878" watchObservedRunningTime="2026-02-27 10:33:09.301947137 +0000 UTC m=+409.264313273" Feb 27 10:33:09 crc kubenswrapper[4728]: I0227 10:33:09.331643 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" podStartSLOduration=4.231601262 podStartE2EDuration="6.331622044s" podCreationTimestamp="2026-02-27 10:33:03 +0000 UTC" firstStartedPulling="2026-02-27 10:33:06.476121558 +0000 UTC m=+406.438487664" lastFinishedPulling="2026-02-27 10:33:08.57614234 +0000 UTC m=+408.538508446" observedRunningTime="2026-02-27 10:33:09.325196364 +0000 UTC m=+409.287562470" watchObservedRunningTime="2026-02-27 10:33:09.331622044 +0000 UTC m=+409.293988150" Feb 27 10:33:12 crc kubenswrapper[4728]: I0227 10:33:12.276210 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"613d3f3d-0258-48b0-ab33-59cb3a3d36a1","Type":"ContainerStarted","Data":"bccccaebb885eb0070b26c344b319d2a4e642ce59cd6e8a685f72042d4d8a83b"} Feb 27 10:33:12 crc kubenswrapper[4728]: I0227 10:33:12.276742 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"613d3f3d-0258-48b0-ab33-59cb3a3d36a1","Type":"ContainerStarted","Data":"4caf5bcfacd7f878594ec4365306e8c5953028100d077950607e6884a0f20c64"} Feb 27 10:33:12 crc kubenswrapper[4728]: I0227 10:33:12.276779 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"613d3f3d-0258-48b0-ab33-59cb3a3d36a1","Type":"ContainerStarted","Data":"808490a85d0236fc705b6468cf323a4c47106aa637332dbb5522edb2bbbcdfd2"} Feb 27 10:33:12 crc kubenswrapper[4728]: I0227 10:33:12.276805 4728 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"613d3f3d-0258-48b0-ab33-59cb3a3d36a1","Type":"ContainerStarted","Data":"287f34abb0aad496370214af95030d5d1937e1fead58ea845185251109a26381"} Feb 27 10:33:12 crc kubenswrapper[4728]: I0227 10:33:12.276828 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"613d3f3d-0258-48b0-ab33-59cb3a3d36a1","Type":"ContainerStarted","Data":"5b1de67ef76936af02485ec0c22e617106eed1398da86af7244eb604653865ea"} Feb 27 10:33:12 crc kubenswrapper[4728]: I0227 10:33:12.276851 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"613d3f3d-0258-48b0-ab33-59cb3a3d36a1","Type":"ContainerStarted","Data":"b637c79c7e80aba7a5577ddf42c08b611b626e87a6e7920deda5267795e1f7a3"} Feb 27 10:33:12 crc kubenswrapper[4728]: I0227 10:33:12.313429 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.123161096 podStartE2EDuration="8.313409979s" podCreationTimestamp="2026-02-27 10:33:04 +0000 UTC" firstStartedPulling="2026-02-27 10:33:07.218113905 +0000 UTC m=+407.180480021" lastFinishedPulling="2026-02-27 10:33:11.408362798 +0000 UTC m=+411.370728904" observedRunningTime="2026-02-27 10:33:12.311265183 +0000 UTC m=+412.273631299" watchObservedRunningTime="2026-02-27 10:33:12.313409979 +0000 UTC m=+412.275776085" Feb 27 10:33:13 crc kubenswrapper[4728]: I0227 10:33:13.665426 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:13 crc kubenswrapper[4728]: I0227 10:33:13.666547 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:13 crc kubenswrapper[4728]: I0227 10:33:13.670911 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:14 crc kubenswrapper[4728]: I0227 10:33:14.298370 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:33:14 crc kubenswrapper[4728]: I0227 10:33:14.347717 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-tvflj"] Feb 27 10:33:15 crc kubenswrapper[4728]: I0227 10:33:15.185601 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:33:15 crc kubenswrapper[4728]: I0227 10:33:15.894962 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-6c5dbd664-trjtq" Feb 27 10:33:24 crc kubenswrapper[4728]: I0227 10:33:24.211445 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:24 crc kubenswrapper[4728]: I0227 10:33:24.212018 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:39 crc kubenswrapper[4728]: I0227 10:33:39.387080 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-tvflj" podUID="b1d22605-abd6-4fc6-8352-8fe78ec02332" containerName="console" containerID="cri-o://f0362825852f38e4ef290066e730fbff332e995d46735cb7c29c1f8dc563bad0" gracePeriod=15 Feb 27 10:33:40 crc kubenswrapper[4728]: I0227 10:33:40.477104 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tvflj_b1d22605-abd6-4fc6-8352-8fe78ec02332/console/0.log" Feb 27 10:33:40 crc kubenswrapper[4728]: I0227 10:33:40.477436 4728 generic.go:334] "Generic (PLEG): container finished" podID="b1d22605-abd6-4fc6-8352-8fe78ec02332" containerID="f0362825852f38e4ef290066e730fbff332e995d46735cb7c29c1f8dc563bad0" exitCode=2 
Feb 27 10:33:40 crc kubenswrapper[4728]: I0227 10:33:40.477469 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tvflj" event={"ID":"b1d22605-abd6-4fc6-8352-8fe78ec02332","Type":"ContainerDied","Data":"f0362825852f38e4ef290066e730fbff332e995d46735cb7c29c1f8dc563bad0"} Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.015987 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tvflj_b1d22605-abd6-4fc6-8352-8fe78ec02332/console/0.log" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.016062 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.143619 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b1d22605-abd6-4fc6-8352-8fe78ec02332-console-oauth-config\") pod \"b1d22605-abd6-4fc6-8352-8fe78ec02332\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.144021 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx27b\" (UniqueName: \"kubernetes.io/projected/b1d22605-abd6-4fc6-8352-8fe78ec02332-kube-api-access-sx27b\") pod \"b1d22605-abd6-4fc6-8352-8fe78ec02332\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.144129 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1d22605-abd6-4fc6-8352-8fe78ec02332-console-serving-cert\") pod \"b1d22605-abd6-4fc6-8352-8fe78ec02332\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.144215 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-service-ca\") pod \"b1d22605-abd6-4fc6-8352-8fe78ec02332\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.144245 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-oauth-serving-cert\") pod \"b1d22605-abd6-4fc6-8352-8fe78ec02332\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.144291 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-trusted-ca-bundle\") pod \"b1d22605-abd6-4fc6-8352-8fe78ec02332\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.144331 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-console-config\") pod \"b1d22605-abd6-4fc6-8352-8fe78ec02332\" (UID: \"b1d22605-abd6-4fc6-8352-8fe78ec02332\") " Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.145281 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-service-ca" (OuterVolumeSpecName: "service-ca") pod "b1d22605-abd6-4fc6-8352-8fe78ec02332" (UID: "b1d22605-abd6-4fc6-8352-8fe78ec02332"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.145297 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b1d22605-abd6-4fc6-8352-8fe78ec02332" (UID: "b1d22605-abd6-4fc6-8352-8fe78ec02332"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.145313 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b1d22605-abd6-4fc6-8352-8fe78ec02332" (UID: "b1d22605-abd6-4fc6-8352-8fe78ec02332"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.145326 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-console-config" (OuterVolumeSpecName: "console-config") pod "b1d22605-abd6-4fc6-8352-8fe78ec02332" (UID: "b1d22605-abd6-4fc6-8352-8fe78ec02332"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.161722 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1d22605-abd6-4fc6-8352-8fe78ec02332-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b1d22605-abd6-4fc6-8352-8fe78ec02332" (UID: "b1d22605-abd6-4fc6-8352-8fe78ec02332"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.162152 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1d22605-abd6-4fc6-8352-8fe78ec02332-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b1d22605-abd6-4fc6-8352-8fe78ec02332" (UID: "b1d22605-abd6-4fc6-8352-8fe78ec02332"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.162701 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d22605-abd6-4fc6-8352-8fe78ec02332-kube-api-access-sx27b" (OuterVolumeSpecName: "kube-api-access-sx27b") pod "b1d22605-abd6-4fc6-8352-8fe78ec02332" (UID: "b1d22605-abd6-4fc6-8352-8fe78ec02332"). InnerVolumeSpecName "kube-api-access-sx27b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.245876 4728 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.245912 4728 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.245925 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.245935 4728 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/b1d22605-abd6-4fc6-8352-8fe78ec02332-console-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.245944 4728 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b1d22605-abd6-4fc6-8352-8fe78ec02332-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.245953 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx27b\" (UniqueName: \"kubernetes.io/projected/b1d22605-abd6-4fc6-8352-8fe78ec02332-kube-api-access-sx27b\") on node \"crc\" DevicePath \"\"" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.245961 4728 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1d22605-abd6-4fc6-8352-8fe78ec02332-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.486538 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tvflj_b1d22605-abd6-4fc6-8352-8fe78ec02332/console/0.log" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.486620 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tvflj" event={"ID":"b1d22605-abd6-4fc6-8352-8fe78ec02332","Type":"ContainerDied","Data":"56c79be41d0ed79e41abcf9d588c28c4a9e0751970eb545cc2f507f5cd3ea33c"} Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.486671 4728 scope.go:117] "RemoveContainer" containerID="f0362825852f38e4ef290066e730fbff332e995d46735cb7c29c1f8dc563bad0" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.486742 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-tvflj" Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.516860 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-tvflj"] Feb 27 10:33:41 crc kubenswrapper[4728]: I0227 10:33:41.522410 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-tvflj"] Feb 27 10:33:42 crc kubenswrapper[4728]: I0227 10:33:42.734559 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1d22605-abd6-4fc6-8352-8fe78ec02332" path="/var/lib/kubelet/pods/b1d22605-abd6-4fc6-8352-8fe78ec02332/volumes" Feb 27 10:33:44 crc kubenswrapper[4728]: I0227 10:33:44.217343 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:33:44 crc kubenswrapper[4728]: I0227 10:33:44.221411 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-865ccf4b8c-kfx52" Feb 27 10:34:00 crc kubenswrapper[4728]: I0227 10:34:00.142025 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536474-8lbnm"] Feb 27 10:34:00 crc kubenswrapper[4728]: E0227 10:34:00.142952 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d22605-abd6-4fc6-8352-8fe78ec02332" containerName="console" Feb 27 10:34:00 crc kubenswrapper[4728]: I0227 10:34:00.142967 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d22605-abd6-4fc6-8352-8fe78ec02332" containerName="console" Feb 27 10:34:00 crc kubenswrapper[4728]: I0227 10:34:00.143544 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d22605-abd6-4fc6-8352-8fe78ec02332" containerName="console" Feb 27 10:34:00 crc kubenswrapper[4728]: I0227 10:34:00.145467 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536474-8lbnm" Feb 27 10:34:00 crc kubenswrapper[4728]: I0227 10:34:00.158650 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 10:34:00 crc kubenswrapper[4728]: I0227 10:34:00.158715 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:34:00 crc kubenswrapper[4728]: I0227 10:34:00.160971 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hzlt\" (UniqueName: \"kubernetes.io/projected/4c489801-99b4-490f-b2b9-f475aabf3f7b-kube-api-access-8hzlt\") pod \"auto-csr-approver-29536474-8lbnm\" (UID: \"4c489801-99b4-490f-b2b9-f475aabf3f7b\") " pod="openshift-infra/auto-csr-approver-29536474-8lbnm" Feb 27 10:34:00 crc kubenswrapper[4728]: I0227 10:34:00.161996 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:34:00 crc kubenswrapper[4728]: I0227 10:34:00.172473 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536474-8lbnm"] Feb 27 10:34:00 crc kubenswrapper[4728]: I0227 10:34:00.263021 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hzlt\" (UniqueName: \"kubernetes.io/projected/4c489801-99b4-490f-b2b9-f475aabf3f7b-kube-api-access-8hzlt\") pod \"auto-csr-approver-29536474-8lbnm\" (UID: \"4c489801-99b4-490f-b2b9-f475aabf3f7b\") " pod="openshift-infra/auto-csr-approver-29536474-8lbnm" Feb 27 10:34:00 crc kubenswrapper[4728]: I0227 10:34:00.289653 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hzlt\" (UniqueName: \"kubernetes.io/projected/4c489801-99b4-490f-b2b9-f475aabf3f7b-kube-api-access-8hzlt\") pod \"auto-csr-approver-29536474-8lbnm\" (UID: \"4c489801-99b4-490f-b2b9-f475aabf3f7b\") " 
pod="openshift-infra/auto-csr-approver-29536474-8lbnm" Feb 27 10:34:00 crc kubenswrapper[4728]: I0227 10:34:00.486928 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536474-8lbnm" Feb 27 10:34:00 crc kubenswrapper[4728]: I0227 10:34:00.743773 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536474-8lbnm"] Feb 27 10:34:01 crc kubenswrapper[4728]: I0227 10:34:01.634877 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536474-8lbnm" event={"ID":"4c489801-99b4-490f-b2b9-f475aabf3f7b","Type":"ContainerStarted","Data":"12353185750c5af13ec10bf50eae113d5a5819742291b99bb26fa44f36aa80ce"} Feb 27 10:34:02 crc kubenswrapper[4728]: I0227 10:34:02.643136 4728 generic.go:334] "Generic (PLEG): container finished" podID="4c489801-99b4-490f-b2b9-f475aabf3f7b" containerID="5840238f596c5bcf14d0d11275e0feb412a632cfba64474a091404dc0cd896aa" exitCode=0 Feb 27 10:34:02 crc kubenswrapper[4728]: I0227 10:34:02.643226 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536474-8lbnm" event={"ID":"4c489801-99b4-490f-b2b9-f475aabf3f7b","Type":"ContainerDied","Data":"5840238f596c5bcf14d0d11275e0feb412a632cfba64474a091404dc0cd896aa"} Feb 27 10:34:03 crc kubenswrapper[4728]: I0227 10:34:03.929615 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536474-8lbnm" Feb 27 10:34:04 crc kubenswrapper[4728]: I0227 10:34:04.017015 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hzlt\" (UniqueName: \"kubernetes.io/projected/4c489801-99b4-490f-b2b9-f475aabf3f7b-kube-api-access-8hzlt\") pod \"4c489801-99b4-490f-b2b9-f475aabf3f7b\" (UID: \"4c489801-99b4-490f-b2b9-f475aabf3f7b\") " Feb 27 10:34:04 crc kubenswrapper[4728]: I0227 10:34:04.021752 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c489801-99b4-490f-b2b9-f475aabf3f7b-kube-api-access-8hzlt" (OuterVolumeSpecName: "kube-api-access-8hzlt") pod "4c489801-99b4-490f-b2b9-f475aabf3f7b" (UID: "4c489801-99b4-490f-b2b9-f475aabf3f7b"). InnerVolumeSpecName "kube-api-access-8hzlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:34:04 crc kubenswrapper[4728]: I0227 10:34:04.118494 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hzlt\" (UniqueName: \"kubernetes.io/projected/4c489801-99b4-490f-b2b9-f475aabf3f7b-kube-api-access-8hzlt\") on node \"crc\" DevicePath \"\"" Feb 27 10:34:04 crc kubenswrapper[4728]: I0227 10:34:04.660399 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536474-8lbnm" event={"ID":"4c489801-99b4-490f-b2b9-f475aabf3f7b","Type":"ContainerDied","Data":"12353185750c5af13ec10bf50eae113d5a5819742291b99bb26fa44f36aa80ce"} Feb 27 10:34:04 crc kubenswrapper[4728]: I0227 10:34:04.660740 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12353185750c5af13ec10bf50eae113d5a5819742291b99bb26fa44f36aa80ce" Feb 27 10:34:04 crc kubenswrapper[4728]: I0227 10:34:04.660464 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536474-8lbnm" Feb 27 10:34:05 crc kubenswrapper[4728]: I0227 10:34:05.005182 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536468-682zs"] Feb 27 10:34:05 crc kubenswrapper[4728]: I0227 10:34:05.021442 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536468-682zs"] Feb 27 10:34:05 crc kubenswrapper[4728]: I0227 10:34:05.185769 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:34:05 crc kubenswrapper[4728]: I0227 10:34:05.238606 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:34:05 crc kubenswrapper[4728]: I0227 10:34:05.704168 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 27 10:34:05 crc kubenswrapper[4728]: I0227 10:34:05.922010 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:34:05 crc kubenswrapper[4728]: I0227 10:34:05.922290 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:34:06 crc kubenswrapper[4728]: I0227 10:34:06.736921 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="826461a8-eef9-4a1f-b4a7-4ff8076ec729" path="/var/lib/kubelet/pods/826461a8-eef9-4a1f-b4a7-4ff8076ec729/volumes" Feb 27 10:34:16 crc 
kubenswrapper[4728]: I0227 10:34:16.012316 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6bb5cd776f-wblt4"] Feb 27 10:34:16 crc kubenswrapper[4728]: E0227 10:34:16.013540 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c489801-99b4-490f-b2b9-f475aabf3f7b" containerName="oc" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.013559 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c489801-99b4-490f-b2b9-f475aabf3f7b" containerName="oc" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.013749 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c489801-99b4-490f-b2b9-f475aabf3f7b" containerName="oc" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.014461 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.033335 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6bb5cd776f-wblt4"] Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.159912 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-service-ca\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.160179 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52fvp\" (UniqueName: \"kubernetes.io/projected/debddd2a-191c-4a46-950a-866e0e68b4be-kube-api-access-52fvp\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.160318 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/debddd2a-191c-4a46-950a-866e0e68b4be-console-oauth-config\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.160406 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-trusted-ca-bundle\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.160487 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-console-config\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.160589 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-oauth-serving-cert\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.160684 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/debddd2a-191c-4a46-950a-866e0e68b4be-console-serving-cert\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc 
kubenswrapper[4728]: I0227 10:34:16.261297 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-service-ca\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.261336 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52fvp\" (UniqueName: \"kubernetes.io/projected/debddd2a-191c-4a46-950a-866e0e68b4be-kube-api-access-52fvp\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.261387 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/debddd2a-191c-4a46-950a-866e0e68b4be-console-oauth-config\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.261406 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-trusted-ca-bundle\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.261437 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-console-config\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 
10:34:16.262460 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-service-ca\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.262964 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-console-config\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.263015 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-oauth-serving-cert\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.261497 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-oauth-serving-cert\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.263146 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/debddd2a-191c-4a46-950a-866e0e68b4be-console-serving-cert\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.263432 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-trusted-ca-bundle\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.270317 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/debddd2a-191c-4a46-950a-866e0e68b4be-console-serving-cert\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.272880 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/debddd2a-191c-4a46-950a-866e0e68b4be-console-oauth-config\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.306963 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52fvp\" (UniqueName: \"kubernetes.io/projected/debddd2a-191c-4a46-950a-866e0e68b4be-kube-api-access-52fvp\") pod \"console-6bb5cd776f-wblt4\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.338305 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:16 crc kubenswrapper[4728]: I0227 10:34:16.815451 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6bb5cd776f-wblt4"] Feb 27 10:34:16 crc kubenswrapper[4728]: W0227 10:34:16.824884 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddebddd2a_191c_4a46_950a_866e0e68b4be.slice/crio-071f821e8671397138ea2e1cf973c6d8629abc425a21ecff570df99169acd675 WatchSource:0}: Error finding container 071f821e8671397138ea2e1cf973c6d8629abc425a21ecff570df99169acd675: Status 404 returned error can't find the container with id 071f821e8671397138ea2e1cf973c6d8629abc425a21ecff570df99169acd675 Feb 27 10:34:17 crc kubenswrapper[4728]: I0227 10:34:17.761156 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bb5cd776f-wblt4" event={"ID":"debddd2a-191c-4a46-950a-866e0e68b4be","Type":"ContainerStarted","Data":"ea55bdb38600049e2e0e775a2ede0430adeff6fa8455d21d408572240d8ff1d7"} Feb 27 10:34:17 crc kubenswrapper[4728]: I0227 10:34:17.761649 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bb5cd776f-wblt4" event={"ID":"debddd2a-191c-4a46-950a-866e0e68b4be","Type":"ContainerStarted","Data":"071f821e8671397138ea2e1cf973c6d8629abc425a21ecff570df99169acd675"} Feb 27 10:34:17 crc kubenswrapper[4728]: I0227 10:34:17.789048 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6bb5cd776f-wblt4" podStartSLOduration=2.78902231 podStartE2EDuration="2.78902231s" podCreationTimestamp="2026-02-27 10:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:34:17.784960132 +0000 UTC m=+477.747326278" watchObservedRunningTime="2026-02-27 10:34:17.78902231 +0000 UTC m=+477.751388446" Feb 27 
10:34:26 crc kubenswrapper[4728]: I0227 10:34:26.339378 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:26 crc kubenswrapper[4728]: I0227 10:34:26.340154 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:26 crc kubenswrapper[4728]: I0227 10:34:26.349052 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:26 crc kubenswrapper[4728]: I0227 10:34:26.845639 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:34:26 crc kubenswrapper[4728]: I0227 10:34:26.925442 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b986f7559-nxhwt"] Feb 27 10:34:35 crc kubenswrapper[4728]: I0227 10:34:35.922157 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:34:35 crc kubenswrapper[4728]: I0227 10:34:35.922652 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:34:51 crc kubenswrapper[4728]: I0227 10:34:51.979262 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5b986f7559-nxhwt" podUID="651e26df-d8dd-423b-9df7-ce05f925582c" containerName="console" containerID="cri-o://1ff21c620cff1431cb08994f5b13b73df6532d6fbab8a0cd88b9cfcc26a5db58" 
gracePeriod=15 Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.430898 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b986f7559-nxhwt_651e26df-d8dd-423b-9df7-ce05f925582c/console/0.log" Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.431255 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.552562 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj6pd\" (UniqueName: \"kubernetes.io/projected/651e26df-d8dd-423b-9df7-ce05f925582c-kube-api-access-rj6pd\") pod \"651e26df-d8dd-423b-9df7-ce05f925582c\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.552600 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/651e26df-d8dd-423b-9df7-ce05f925582c-console-oauth-config\") pod \"651e26df-d8dd-423b-9df7-ce05f925582c\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.552629 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-oauth-serving-cert\") pod \"651e26df-d8dd-423b-9df7-ce05f925582c\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.552660 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/651e26df-d8dd-423b-9df7-ce05f925582c-console-serving-cert\") pod \"651e26df-d8dd-423b-9df7-ce05f925582c\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.552677 4728 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-trusted-ca-bundle\") pod \"651e26df-d8dd-423b-9df7-ce05f925582c\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.552706 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-service-ca\") pod \"651e26df-d8dd-423b-9df7-ce05f925582c\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.552727 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-console-config\") pod \"651e26df-d8dd-423b-9df7-ce05f925582c\" (UID: \"651e26df-d8dd-423b-9df7-ce05f925582c\") " Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.553569 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-service-ca" (OuterVolumeSpecName: "service-ca") pod "651e26df-d8dd-423b-9df7-ce05f925582c" (UID: "651e26df-d8dd-423b-9df7-ce05f925582c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.553599 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-console-config" (OuterVolumeSpecName: "console-config") pod "651e26df-d8dd-423b-9df7-ce05f925582c" (UID: "651e26df-d8dd-423b-9df7-ce05f925582c"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.553630 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "651e26df-d8dd-423b-9df7-ce05f925582c" (UID: "651e26df-d8dd-423b-9df7-ce05f925582c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.553653 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "651e26df-d8dd-423b-9df7-ce05f925582c" (UID: "651e26df-d8dd-423b-9df7-ce05f925582c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.558678 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/651e26df-d8dd-423b-9df7-ce05f925582c-kube-api-access-rj6pd" (OuterVolumeSpecName: "kube-api-access-rj6pd") pod "651e26df-d8dd-423b-9df7-ce05f925582c" (UID: "651e26df-d8dd-423b-9df7-ce05f925582c"). InnerVolumeSpecName "kube-api-access-rj6pd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.558690 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/651e26df-d8dd-423b-9df7-ce05f925582c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "651e26df-d8dd-423b-9df7-ce05f925582c" (UID: "651e26df-d8dd-423b-9df7-ce05f925582c"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.558734 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/651e26df-d8dd-423b-9df7-ce05f925582c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "651e26df-d8dd-423b-9df7-ce05f925582c" (UID: "651e26df-d8dd-423b-9df7-ce05f925582c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.655237 4728 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/651e26df-d8dd-423b-9df7-ce05f925582c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.655295 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.655321 4728 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.655344 4728 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-console-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.655368 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj6pd\" (UniqueName: \"kubernetes.io/projected/651e26df-d8dd-423b-9df7-ce05f925582c-kube-api-access-rj6pd\") on node \"crc\" DevicePath \"\"" Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.655392 4728 reconciler_common.go:293] "Volume detached for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/651e26df-d8dd-423b-9df7-ce05f925582c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:34:52 crc kubenswrapper[4728]: I0227 10:34:52.655413 4728 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/651e26df-d8dd-423b-9df7-ce05f925582c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:34:53 crc kubenswrapper[4728]: I0227 10:34:53.036659 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b986f7559-nxhwt_651e26df-d8dd-423b-9df7-ce05f925582c/console/0.log" Feb 27 10:34:53 crc kubenswrapper[4728]: I0227 10:34:53.036719 4728 generic.go:334] "Generic (PLEG): container finished" podID="651e26df-d8dd-423b-9df7-ce05f925582c" containerID="1ff21c620cff1431cb08994f5b13b73df6532d6fbab8a0cd88b9cfcc26a5db58" exitCode=2 Feb 27 10:34:53 crc kubenswrapper[4728]: I0227 10:34:53.036761 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b986f7559-nxhwt" event={"ID":"651e26df-d8dd-423b-9df7-ce05f925582c","Type":"ContainerDied","Data":"1ff21c620cff1431cb08994f5b13b73df6532d6fbab8a0cd88b9cfcc26a5db58"} Feb 27 10:34:53 crc kubenswrapper[4728]: I0227 10:34:53.036797 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b986f7559-nxhwt" event={"ID":"651e26df-d8dd-423b-9df7-ce05f925582c","Type":"ContainerDied","Data":"b9864df126d88724016843cb4985ab4b50c5c409620bb47a8f80220297d334f2"} Feb 27 10:34:53 crc kubenswrapper[4728]: I0227 10:34:53.036823 4728 scope.go:117] "RemoveContainer" containerID="1ff21c620cff1431cb08994f5b13b73df6532d6fbab8a0cd88b9cfcc26a5db58" Feb 27 10:34:53 crc kubenswrapper[4728]: I0227 10:34:53.037009 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b986f7559-nxhwt" Feb 27 10:34:53 crc kubenswrapper[4728]: I0227 10:34:53.058086 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b986f7559-nxhwt"] Feb 27 10:34:53 crc kubenswrapper[4728]: I0227 10:34:53.062701 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5b986f7559-nxhwt"] Feb 27 10:34:53 crc kubenswrapper[4728]: I0227 10:34:53.070307 4728 scope.go:117] "RemoveContainer" containerID="1ff21c620cff1431cb08994f5b13b73df6532d6fbab8a0cd88b9cfcc26a5db58" Feb 27 10:34:53 crc kubenswrapper[4728]: E0227 10:34:53.070994 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ff21c620cff1431cb08994f5b13b73df6532d6fbab8a0cd88b9cfcc26a5db58\": container with ID starting with 1ff21c620cff1431cb08994f5b13b73df6532d6fbab8a0cd88b9cfcc26a5db58 not found: ID does not exist" containerID="1ff21c620cff1431cb08994f5b13b73df6532d6fbab8a0cd88b9cfcc26a5db58" Feb 27 10:34:53 crc kubenswrapper[4728]: I0227 10:34:53.071036 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ff21c620cff1431cb08994f5b13b73df6532d6fbab8a0cd88b9cfcc26a5db58"} err="failed to get container status \"1ff21c620cff1431cb08994f5b13b73df6532d6fbab8a0cd88b9cfcc26a5db58\": rpc error: code = NotFound desc = could not find container \"1ff21c620cff1431cb08994f5b13b73df6532d6fbab8a0cd88b9cfcc26a5db58\": container with ID starting with 1ff21c620cff1431cb08994f5b13b73df6532d6fbab8a0cd88b9cfcc26a5db58 not found: ID does not exist" Feb 27 10:34:54 crc kubenswrapper[4728]: I0227 10:34:54.738105 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="651e26df-d8dd-423b-9df7-ce05f925582c" path="/var/lib/kubelet/pods/651e26df-d8dd-423b-9df7-ce05f925582c/volumes" Feb 27 10:35:05 crc kubenswrapper[4728]: I0227 10:35:05.922458 4728 patch_prober.go:28] interesting 
pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:35:05 crc kubenswrapper[4728]: I0227 10:35:05.923309 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:35:05 crc kubenswrapper[4728]: I0227 10:35:05.923384 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:35:05 crc kubenswrapper[4728]: I0227 10:35:05.924332 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2416fbc83dda100006dd5fec140cd5b4cb87d01da9d620e87c0949af705e048d"} pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 10:35:05 crc kubenswrapper[4728]: I0227 10:35:05.924579 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" containerID="cri-o://2416fbc83dda100006dd5fec140cd5b4cb87d01da9d620e87c0949af705e048d" gracePeriod=600 Feb 27 10:35:06 crc kubenswrapper[4728]: I0227 10:35:06.139272 4728 generic.go:334] "Generic (PLEG): container finished" podID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerID="2416fbc83dda100006dd5fec140cd5b4cb87d01da9d620e87c0949af705e048d" exitCode=0 Feb 27 10:35:06 crc kubenswrapper[4728]: I0227 10:35:06.139347 
4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerDied","Data":"2416fbc83dda100006dd5fec140cd5b4cb87d01da9d620e87c0949af705e048d"} Feb 27 10:35:06 crc kubenswrapper[4728]: I0227 10:35:06.139728 4728 scope.go:117] "RemoveContainer" containerID="983e19c2154a1b01db67f4b9f25a99f1aecc3d35ea0f570828eabe5e7d0b10ac" Feb 27 10:35:07 crc kubenswrapper[4728]: I0227 10:35:07.152986 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"7142bbcd5732490b77191220972aa455a45bbcb3be86cc2f77bc37171cdfdc5d"} Feb 27 10:36:00 crc kubenswrapper[4728]: I0227 10:36:00.150676 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536476-hv4q5"] Feb 27 10:36:00 crc kubenswrapper[4728]: E0227 10:36:00.151646 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="651e26df-d8dd-423b-9df7-ce05f925582c" containerName="console" Feb 27 10:36:00 crc kubenswrapper[4728]: I0227 10:36:00.151671 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="651e26df-d8dd-423b-9df7-ce05f925582c" containerName="console" Feb 27 10:36:00 crc kubenswrapper[4728]: I0227 10:36:00.151862 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="651e26df-d8dd-423b-9df7-ce05f925582c" containerName="console" Feb 27 10:36:00 crc kubenswrapper[4728]: I0227 10:36:00.152651 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536476-hv4q5" Feb 27 10:36:00 crc kubenswrapper[4728]: I0227 10:36:00.156474 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:36:00 crc kubenswrapper[4728]: I0227 10:36:00.156490 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:36:00 crc kubenswrapper[4728]: I0227 10:36:00.156742 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 10:36:00 crc kubenswrapper[4728]: I0227 10:36:00.166583 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536476-hv4q5"] Feb 27 10:36:00 crc kubenswrapper[4728]: I0227 10:36:00.207855 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7shgh\" (UniqueName: \"kubernetes.io/projected/bf9c8f91-e276-4cbb-879a-505120485bf3-kube-api-access-7shgh\") pod \"auto-csr-approver-29536476-hv4q5\" (UID: \"bf9c8f91-e276-4cbb-879a-505120485bf3\") " pod="openshift-infra/auto-csr-approver-29536476-hv4q5" Feb 27 10:36:00 crc kubenswrapper[4728]: I0227 10:36:00.308913 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7shgh\" (UniqueName: \"kubernetes.io/projected/bf9c8f91-e276-4cbb-879a-505120485bf3-kube-api-access-7shgh\") pod \"auto-csr-approver-29536476-hv4q5\" (UID: \"bf9c8f91-e276-4cbb-879a-505120485bf3\") " pod="openshift-infra/auto-csr-approver-29536476-hv4q5" Feb 27 10:36:00 crc kubenswrapper[4728]: I0227 10:36:00.342745 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7shgh\" (UniqueName: \"kubernetes.io/projected/bf9c8f91-e276-4cbb-879a-505120485bf3-kube-api-access-7shgh\") pod \"auto-csr-approver-29536476-hv4q5\" (UID: \"bf9c8f91-e276-4cbb-879a-505120485bf3\") " 
pod="openshift-infra/auto-csr-approver-29536476-hv4q5" Feb 27 10:36:00 crc kubenswrapper[4728]: I0227 10:36:00.470560 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536476-hv4q5" Feb 27 10:36:00 crc kubenswrapper[4728]: I0227 10:36:00.992018 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536476-hv4q5"] Feb 27 10:36:01 crc kubenswrapper[4728]: I0227 10:36:01.006017 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 10:36:01 crc kubenswrapper[4728]: I0227 10:36:01.586446 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536476-hv4q5" event={"ID":"bf9c8f91-e276-4cbb-879a-505120485bf3","Type":"ContainerStarted","Data":"428717aae00e40d751726c98f1c468c830e9da124d16d75694c2937c899e1db2"} Feb 27 10:36:02 crc kubenswrapper[4728]: I0227 10:36:02.600352 4728 generic.go:334] "Generic (PLEG): container finished" podID="bf9c8f91-e276-4cbb-879a-505120485bf3" containerID="ed6689cb9a50987b89277af9ed5f9a6e61e2993f7d3db69e312c777ebb5bd9ea" exitCode=0 Feb 27 10:36:02 crc kubenswrapper[4728]: I0227 10:36:02.600654 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536476-hv4q5" event={"ID":"bf9c8f91-e276-4cbb-879a-505120485bf3","Type":"ContainerDied","Data":"ed6689cb9a50987b89277af9ed5f9a6e61e2993f7d3db69e312c777ebb5bd9ea"} Feb 27 10:36:03 crc kubenswrapper[4728]: I0227 10:36:03.814068 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536476-hv4q5" Feb 27 10:36:03 crc kubenswrapper[4728]: I0227 10:36:03.963083 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7shgh\" (UniqueName: \"kubernetes.io/projected/bf9c8f91-e276-4cbb-879a-505120485bf3-kube-api-access-7shgh\") pod \"bf9c8f91-e276-4cbb-879a-505120485bf3\" (UID: \"bf9c8f91-e276-4cbb-879a-505120485bf3\") " Feb 27 10:36:03 crc kubenswrapper[4728]: I0227 10:36:03.972577 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf9c8f91-e276-4cbb-879a-505120485bf3-kube-api-access-7shgh" (OuterVolumeSpecName: "kube-api-access-7shgh") pod "bf9c8f91-e276-4cbb-879a-505120485bf3" (UID: "bf9c8f91-e276-4cbb-879a-505120485bf3"). InnerVolumeSpecName "kube-api-access-7shgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:36:04 crc kubenswrapper[4728]: I0227 10:36:04.065005 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7shgh\" (UniqueName: \"kubernetes.io/projected/bf9c8f91-e276-4cbb-879a-505120485bf3-kube-api-access-7shgh\") on node \"crc\" DevicePath \"\"" Feb 27 10:36:04 crc kubenswrapper[4728]: I0227 10:36:04.619234 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536476-hv4q5" event={"ID":"bf9c8f91-e276-4cbb-879a-505120485bf3","Type":"ContainerDied","Data":"428717aae00e40d751726c98f1c468c830e9da124d16d75694c2937c899e1db2"} Feb 27 10:36:04 crc kubenswrapper[4728]: I0227 10:36:04.619625 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="428717aae00e40d751726c98f1c468c830e9da124d16d75694c2937c899e1db2" Feb 27 10:36:04 crc kubenswrapper[4728]: I0227 10:36:04.619729 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536476-hv4q5" Feb 27 10:36:04 crc kubenswrapper[4728]: I0227 10:36:04.881722 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536470-xfw9v"] Feb 27 10:36:04 crc kubenswrapper[4728]: I0227 10:36:04.890219 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536470-xfw9v"] Feb 27 10:36:06 crc kubenswrapper[4728]: I0227 10:36:06.734739 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="380270a6-c1d3-49a1-b3c7-9080ae9038b9" path="/var/lib/kubelet/pods/380270a6-c1d3-49a1-b3c7-9080ae9038b9/volumes" Feb 27 10:36:47 crc kubenswrapper[4728]: I0227 10:36:47.573056 4728 scope.go:117] "RemoveContainer" containerID="06e461d77ad5ff8e32ac2b4c96f09589f5d1521f4760cb602916d21d7e3204b2" Feb 27 10:36:47 crc kubenswrapper[4728]: I0227 10:36:47.619459 4728 scope.go:117] "RemoveContainer" containerID="3160b2014171f57fd5e20be89b6d9d288e16ad060f61d38d1026dc51dd130408" Feb 27 10:36:47 crc kubenswrapper[4728]: I0227 10:36:47.669431 4728 scope.go:117] "RemoveContainer" containerID="f44bbbbf88244223d404524160d5f84258c555bedaec4c3f6f047aaaa67a099d" Feb 27 10:37:11 crc kubenswrapper[4728]: I0227 10:37:11.151317 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56"] Feb 27 10:37:11 crc kubenswrapper[4728]: E0227 10:37:11.152027 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9c8f91-e276-4cbb-879a-505120485bf3" containerName="oc" Feb 27 10:37:11 crc kubenswrapper[4728]: I0227 10:37:11.152040 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9c8f91-e276-4cbb-879a-505120485bf3" containerName="oc" Feb 27 10:37:11 crc kubenswrapper[4728]: I0227 10:37:11.152152 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf9c8f91-e276-4cbb-879a-505120485bf3" containerName="oc" Feb 27 10:37:11 
crc kubenswrapper[4728]: I0227 10:37:11.152958 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" Feb 27 10:37:11 crc kubenswrapper[4728]: I0227 10:37:11.154858 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 27 10:37:11 crc kubenswrapper[4728]: I0227 10:37:11.165325 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56"] Feb 27 10:37:11 crc kubenswrapper[4728]: I0227 10:37:11.304771 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa621064-b4a3-4d02-873b-c67b5a2f311c-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56\" (UID: \"aa621064-b4a3-4d02-873b-c67b5a2f311c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" Feb 27 10:37:11 crc kubenswrapper[4728]: I0227 10:37:11.304855 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrhfl\" (UniqueName: \"kubernetes.io/projected/aa621064-b4a3-4d02-873b-c67b5a2f311c-kube-api-access-wrhfl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56\" (UID: \"aa621064-b4a3-4d02-873b-c67b5a2f311c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" Feb 27 10:37:11 crc kubenswrapper[4728]: I0227 10:37:11.305033 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa621064-b4a3-4d02-873b-c67b5a2f311c-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56\" (UID: \"aa621064-b4a3-4d02-873b-c67b5a2f311c\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" Feb 27 10:37:11 crc kubenswrapper[4728]: I0227 10:37:11.406058 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrhfl\" (UniqueName: \"kubernetes.io/projected/aa621064-b4a3-4d02-873b-c67b5a2f311c-kube-api-access-wrhfl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56\" (UID: \"aa621064-b4a3-4d02-873b-c67b5a2f311c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" Feb 27 10:37:11 crc kubenswrapper[4728]: I0227 10:37:11.406138 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa621064-b4a3-4d02-873b-c67b5a2f311c-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56\" (UID: \"aa621064-b4a3-4d02-873b-c67b5a2f311c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" Feb 27 10:37:11 crc kubenswrapper[4728]: I0227 10:37:11.406197 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa621064-b4a3-4d02-873b-c67b5a2f311c-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56\" (UID: \"aa621064-b4a3-4d02-873b-c67b5a2f311c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" Feb 27 10:37:11 crc kubenswrapper[4728]: I0227 10:37:11.406686 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa621064-b4a3-4d02-873b-c67b5a2f311c-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56\" (UID: \"aa621064-b4a3-4d02-873b-c67b5a2f311c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" Feb 27 10:37:11 crc kubenswrapper[4728]: I0227 10:37:11.407051 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa621064-b4a3-4d02-873b-c67b5a2f311c-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56\" (UID: \"aa621064-b4a3-4d02-873b-c67b5a2f311c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" Feb 27 10:37:11 crc kubenswrapper[4728]: I0227 10:37:11.429133 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrhfl\" (UniqueName: \"kubernetes.io/projected/aa621064-b4a3-4d02-873b-c67b5a2f311c-kube-api-access-wrhfl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56\" (UID: \"aa621064-b4a3-4d02-873b-c67b5a2f311c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" Feb 27 10:37:11 crc kubenswrapper[4728]: I0227 10:37:11.473242 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" Feb 27 10:37:11 crc kubenswrapper[4728]: I0227 10:37:11.694196 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56"] Feb 27 10:37:12 crc kubenswrapper[4728]: I0227 10:37:12.162853 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" event={"ID":"aa621064-b4a3-4d02-873b-c67b5a2f311c","Type":"ContainerStarted","Data":"5cdf29194db2e8a2d3276ed5cd0c31df11cbb3c666ef3ee21b9922322809ae91"} Feb 27 10:37:12 crc kubenswrapper[4728]: I0227 10:37:12.162892 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" 
event={"ID":"aa621064-b4a3-4d02-873b-c67b5a2f311c","Type":"ContainerStarted","Data":"a8c0aaa27bc15e0978b29ca84b54b1cb02be4be3b76de2e3f46f0bbfad4bb034"} Feb 27 10:37:13 crc kubenswrapper[4728]: I0227 10:37:13.170096 4728 generic.go:334] "Generic (PLEG): container finished" podID="aa621064-b4a3-4d02-873b-c67b5a2f311c" containerID="5cdf29194db2e8a2d3276ed5cd0c31df11cbb3c666ef3ee21b9922322809ae91" exitCode=0 Feb 27 10:37:13 crc kubenswrapper[4728]: I0227 10:37:13.170167 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" event={"ID":"aa621064-b4a3-4d02-873b-c67b5a2f311c","Type":"ContainerDied","Data":"5cdf29194db2e8a2d3276ed5cd0c31df11cbb3c666ef3ee21b9922322809ae91"} Feb 27 10:37:14 crc kubenswrapper[4728]: I0227 10:37:14.177049 4728 generic.go:334] "Generic (PLEG): container finished" podID="aa621064-b4a3-4d02-873b-c67b5a2f311c" containerID="f3203376f3730f71b9a54ace12ac666c44abcca89b16b217774ab447c35a896f" exitCode=0 Feb 27 10:37:14 crc kubenswrapper[4728]: I0227 10:37:14.177186 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" event={"ID":"aa621064-b4a3-4d02-873b-c67b5a2f311c","Type":"ContainerDied","Data":"f3203376f3730f71b9a54ace12ac666c44abcca89b16b217774ab447c35a896f"} Feb 27 10:37:15 crc kubenswrapper[4728]: I0227 10:37:15.187558 4728 generic.go:334] "Generic (PLEG): container finished" podID="aa621064-b4a3-4d02-873b-c67b5a2f311c" containerID="6be38125afbe9b3e5f235ef61d3e52da86bc042a40942c2c874485dcd53183a5" exitCode=0 Feb 27 10:37:15 crc kubenswrapper[4728]: I0227 10:37:15.187620 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" event={"ID":"aa621064-b4a3-4d02-873b-c67b5a2f311c","Type":"ContainerDied","Data":"6be38125afbe9b3e5f235ef61d3e52da86bc042a40942c2c874485dcd53183a5"} 
Feb 27 10:37:16 crc kubenswrapper[4728]: I0227 10:37:16.498366 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" Feb 27 10:37:16 crc kubenswrapper[4728]: I0227 10:37:16.694166 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa621064-b4a3-4d02-873b-c67b5a2f311c-bundle\") pod \"aa621064-b4a3-4d02-873b-c67b5a2f311c\" (UID: \"aa621064-b4a3-4d02-873b-c67b5a2f311c\") " Feb 27 10:37:16 crc kubenswrapper[4728]: I0227 10:37:16.694325 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa621064-b4a3-4d02-873b-c67b5a2f311c-util\") pod \"aa621064-b4a3-4d02-873b-c67b5a2f311c\" (UID: \"aa621064-b4a3-4d02-873b-c67b5a2f311c\") " Feb 27 10:37:16 crc kubenswrapper[4728]: I0227 10:37:16.694360 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrhfl\" (UniqueName: \"kubernetes.io/projected/aa621064-b4a3-4d02-873b-c67b5a2f311c-kube-api-access-wrhfl\") pod \"aa621064-b4a3-4d02-873b-c67b5a2f311c\" (UID: \"aa621064-b4a3-4d02-873b-c67b5a2f311c\") " Feb 27 10:37:16 crc kubenswrapper[4728]: I0227 10:37:16.697248 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa621064-b4a3-4d02-873b-c67b5a2f311c-bundle" (OuterVolumeSpecName: "bundle") pod "aa621064-b4a3-4d02-873b-c67b5a2f311c" (UID: "aa621064-b4a3-4d02-873b-c67b5a2f311c"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:37:16 crc kubenswrapper[4728]: I0227 10:37:16.703069 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa621064-b4a3-4d02-873b-c67b5a2f311c-kube-api-access-wrhfl" (OuterVolumeSpecName: "kube-api-access-wrhfl") pod "aa621064-b4a3-4d02-873b-c67b5a2f311c" (UID: "aa621064-b4a3-4d02-873b-c67b5a2f311c"). InnerVolumeSpecName "kube-api-access-wrhfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:16 crc kubenswrapper[4728]: I0227 10:37:16.721942 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa621064-b4a3-4d02-873b-c67b5a2f311c-util" (OuterVolumeSpecName: "util") pod "aa621064-b4a3-4d02-873b-c67b5a2f311c" (UID: "aa621064-b4a3-4d02-873b-c67b5a2f311c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:37:16 crc kubenswrapper[4728]: I0227 10:37:16.797927 4728 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa621064-b4a3-4d02-873b-c67b5a2f311c-util\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:16 crc kubenswrapper[4728]: I0227 10:37:16.797990 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrhfl\" (UniqueName: \"kubernetes.io/projected/aa621064-b4a3-4d02-873b-c67b5a2f311c-kube-api-access-wrhfl\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:16 crc kubenswrapper[4728]: I0227 10:37:16.798013 4728 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa621064-b4a3-4d02-873b-c67b5a2f311c-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:17 crc kubenswrapper[4728]: I0227 10:37:17.210257 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" 
event={"ID":"aa621064-b4a3-4d02-873b-c67b5a2f311c","Type":"ContainerDied","Data":"a8c0aaa27bc15e0978b29ca84b54b1cb02be4be3b76de2e3f46f0bbfad4bb034"} Feb 27 10:37:17 crc kubenswrapper[4728]: I0227 10:37:17.210302 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8c0aaa27bc15e0978b29ca84b54b1cb02be4be3b76de2e3f46f0bbfad4bb034" Feb 27 10:37:17 crc kubenswrapper[4728]: I0227 10:37:17.210395 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56" Feb 27 10:37:22 crc kubenswrapper[4728]: I0227 10:37:22.265956 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rpr29"] Feb 27 10:37:22 crc kubenswrapper[4728]: I0227 10:37:22.267823 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="ovn-controller" containerID="cri-o://6fe524630c9ea646073ef4c97fedec7a4605b268bff442198ad57cf07a033f1e" gracePeriod=30 Feb 27 10:37:22 crc kubenswrapper[4728]: I0227 10:37:22.267967 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://97ccafee960cda236a6c78d9ac91eb112c587cbb3a09b3f3fe63fd6a169527df" gracePeriod=30 Feb 27 10:37:22 crc kubenswrapper[4728]: I0227 10:37:22.268026 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="ovn-acl-logging" containerID="cri-o://a2be6adb37ecb98705469e4fb97ada883e4ceb5c205af3e70b54b1a6f88b0a83" gracePeriod=30 Feb 27 10:37:22 crc kubenswrapper[4728]: I0227 10:37:22.268045 4728 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="sbdb" containerID="cri-o://03ade0507ce072ec69afdcbe9da276564bbe5614b51ff1367e19f01c8e2e64fb" gracePeriod=30 Feb 27 10:37:22 crc kubenswrapper[4728]: I0227 10:37:22.267963 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="nbdb" containerID="cri-o://b09c7887821795cd7de5864ce224a276f25faa70f181b308de3f418f7b1a241f" gracePeriod=30 Feb 27 10:37:22 crc kubenswrapper[4728]: I0227 10:37:22.268000 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="kube-rbac-proxy-node" containerID="cri-o://226b2fb5a272504808ea376efa8cfc95a1089240a4d41df99b2a13c37e97dabd" gracePeriod=30 Feb 27 10:37:22 crc kubenswrapper[4728]: I0227 10:37:22.268211 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="northd" containerID="cri-o://2bfe2d245cfb745fa96a831c2aedf5cc232e00a62b801485743d7da8e9ae6406" gracePeriod=30 Feb 27 10:37:22 crc kubenswrapper[4728]: I0227 10:37:22.326692 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="ovnkube-controller" containerID="cri-o://1918119473c9678a4002db8e1aaf8917697032ea10fb68d07119849b6c95058d" gracePeriod=30 Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.255884 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rpr29_b021ff26-58a3-4418-b6ba-4aa8e0bb6746/ovn-acl-logging/0.log" Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 
10:37:23.256723 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rpr29_b021ff26-58a3-4418-b6ba-4aa8e0bb6746/ovn-controller/0.log" Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.257143 4728 generic.go:334] "Generic (PLEG): container finished" podID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerID="1918119473c9678a4002db8e1aaf8917697032ea10fb68d07119849b6c95058d" exitCode=0 Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.257168 4728 generic.go:334] "Generic (PLEG): container finished" podID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerID="03ade0507ce072ec69afdcbe9da276564bbe5614b51ff1367e19f01c8e2e64fb" exitCode=0 Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.257177 4728 generic.go:334] "Generic (PLEG): container finished" podID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerID="b09c7887821795cd7de5864ce224a276f25faa70f181b308de3f418f7b1a241f" exitCode=0 Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.257187 4728 generic.go:334] "Generic (PLEG): container finished" podID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerID="2bfe2d245cfb745fa96a831c2aedf5cc232e00a62b801485743d7da8e9ae6406" exitCode=0 Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.257195 4728 generic.go:334] "Generic (PLEG): container finished" podID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerID="a2be6adb37ecb98705469e4fb97ada883e4ceb5c205af3e70b54b1a6f88b0a83" exitCode=143 Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.257204 4728 generic.go:334] "Generic (PLEG): container finished" podID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerID="6fe524630c9ea646073ef4c97fedec7a4605b268bff442198ad57cf07a033f1e" exitCode=143 Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.257251 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" 
event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerDied","Data":"1918119473c9678a4002db8e1aaf8917697032ea10fb68d07119849b6c95058d"} Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.257329 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerDied","Data":"03ade0507ce072ec69afdcbe9da276564bbe5614b51ff1367e19f01c8e2e64fb"} Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.257353 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerDied","Data":"b09c7887821795cd7de5864ce224a276f25faa70f181b308de3f418f7b1a241f"} Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.257376 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerDied","Data":"2bfe2d245cfb745fa96a831c2aedf5cc232e00a62b801485743d7da8e9ae6406"} Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.257394 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerDied","Data":"a2be6adb37ecb98705469e4fb97ada883e4ceb5c205af3e70b54b1a6f88b0a83"} Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.257411 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerDied","Data":"6fe524630c9ea646073ef4c97fedec7a4605b268bff442198ad57cf07a033f1e"} Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.259625 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9tlth_468912b7-185a-4869-9a65-70cbcb3c4fb1/kube-multus/0.log" Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 
10:37:23.259667 4728 generic.go:334] "Generic (PLEG): container finished" podID="468912b7-185a-4869-9a65-70cbcb3c4fb1" containerID="f67177f4cd8151bc3425e0989b15e78fd050fb5688cddece113bc16eed09512f" exitCode=2 Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.259694 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9tlth" event={"ID":"468912b7-185a-4869-9a65-70cbcb3c4fb1","Type":"ContainerDied","Data":"f67177f4cd8151bc3425e0989b15e78fd050fb5688cddece113bc16eed09512f"} Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.260162 4728 scope.go:117] "RemoveContainer" containerID="f67177f4cd8151bc3425e0989b15e78fd050fb5688cddece113bc16eed09512f" Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.995291 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rpr29_b021ff26-58a3-4418-b6ba-4aa8e0bb6746/ovn-acl-logging/0.log" Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.996238 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rpr29_b021ff26-58a3-4418-b6ba-4aa8e0bb6746/ovn-controller/0.log" Feb 27 10:37:23 crc kubenswrapper[4728]: I0227 10:37:23.998572 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024411 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-env-overrides\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024454 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-kubelet\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024518 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-run-ovn\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024546 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-var-lib-openvswitch\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024566 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-systemd-units\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024599 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-slash\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024629 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-etc-openvswitch\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024646 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-run-openvswitch\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024661 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-run-netns\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024685 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-log-socket\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024720 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-ovnkube-config\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 
10:37:24.024738 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-node-log\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024766 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024785 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-cni-netd\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024803 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-cni-bin\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024819 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnx4z\" (UniqueName: \"kubernetes.io/projected/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-kube-api-access-dnx4z\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024845 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-run-ovn-kubernetes\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024875 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-ovnkube-script-lib\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024898 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-run-systemd\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.024916 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-ovn-node-metrics-cert\") pod \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\" (UID: \"b021ff26-58a3-4418-b6ba-4aa8e0bb6746\") " Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.026636 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.026658 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-log-socket" (OuterVolumeSpecName: "log-socket") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.026722 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-slash" (OuterVolumeSpecName: "host-slash") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.026747 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.026769 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.026790 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.026811 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.026852 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.026874 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.026923 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.026954 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.026980 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.027043 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-node-log" (OuterVolumeSpecName: "node-log") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.027069 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.027227 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.027266 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.027492 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.031756 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.034220 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-kube-api-access-dnx4z" (OuterVolumeSpecName: "kube-api-access-dnx4z") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "kube-api-access-dnx4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.054771 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b021ff26-58a3-4418-b6ba-4aa8e0bb6746" (UID: "b021ff26-58a3-4418-b6ba-4aa8e0bb6746"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061095 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pmcdw"] Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.061394 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="nbdb" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061419 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="nbdb" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.061434 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa621064-b4a3-4d02-873b-c67b5a2f311c" containerName="util" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061442 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa621064-b4a3-4d02-873b-c67b5a2f311c" containerName="util" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.061456 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="ovn-controller" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061464 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="ovn-controller" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.061474 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="kube-rbac-proxy-node" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061483 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="kube-rbac-proxy-node" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.061498 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="sbdb" Feb 27 10:37:24 crc 
kubenswrapper[4728]: I0227 10:37:24.061525 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="sbdb" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.061542 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="kubecfg-setup" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061551 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="kubecfg-setup" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.061566 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="ovnkube-controller" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061574 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="ovnkube-controller" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.061583 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa621064-b4a3-4d02-873b-c67b5a2f311c" containerName="extract" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061589 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa621064-b4a3-4d02-873b-c67b5a2f311c" containerName="extract" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.061597 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="northd" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061604 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="northd" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.061615 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="kube-rbac-proxy-ovn-metrics" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061623 
4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="kube-rbac-proxy-ovn-metrics" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.061637 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa621064-b4a3-4d02-873b-c67b5a2f311c" containerName="pull" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061645 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa621064-b4a3-4d02-873b-c67b5a2f311c" containerName="pull" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.061657 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="ovn-acl-logging" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061665 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="ovn-acl-logging" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061791 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="kube-rbac-proxy-node" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061805 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="kube-rbac-proxy-ovn-metrics" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061817 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="northd" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061828 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa621064-b4a3-4d02-873b-c67b5a2f311c" containerName="extract" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061838 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="ovnkube-controller" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061848 4728 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="sbdb" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061859 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="nbdb" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061868 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="ovn-controller" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.061881 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerName="ovn-acl-logging" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.064421 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.125932 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtsdd\" (UniqueName: \"kubernetes.io/projected/37aa7967-25e0-4f97-b792-65ace9fdfd37-kube-api-access-wtsdd\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.125977 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-kubelet\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.125994 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/37aa7967-25e0-4f97-b792-65ace9fdfd37-ovnkube-config\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126159 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-slash\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126212 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-run-ovn\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126263 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-etc-openvswitch\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126317 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-cni-bin\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126353 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/37aa7967-25e0-4f97-b792-65ace9fdfd37-ovnkube-script-lib\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126374 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-var-lib-openvswitch\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126398 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-run-ovn-kubernetes\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126446 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-run-systemd\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126473 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-log-socket\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126495 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-run-netns\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126545 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-node-log\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126563 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126599 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/37aa7967-25e0-4f97-b792-65ace9fdfd37-env-overrides\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126617 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-cni-netd\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126635 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-systemd-units\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126674 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/37aa7967-25e0-4f97-b792-65ace9fdfd37-ovn-node-metrics-cert\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126719 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-run-openvswitch\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126849 4728 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126861 4728 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126871 4728 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 
10:37:24.126896 4728 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126905 4728 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126914 4728 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126923 4728 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126934 4728 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126943 4728 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-slash\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126963 4728 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126973 4728 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126983 4728 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.126992 4728 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-log-socket\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.127001 4728 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.127010 4728 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-node-log\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.127021 4728 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.127029 4728 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.127037 4728 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-cni-bin\") on node \"crc\" 
DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.127044 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnx4z\" (UniqueName: \"kubernetes.io/projected/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-kube-api-access-dnx4z\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.127053 4728 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b021ff26-58a3-4418-b6ba-4aa8e0bb6746-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.227919 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-systemd-units\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228272 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/37aa7967-25e0-4f97-b792-65ace9fdfd37-ovn-node-metrics-cert\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228296 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-run-openvswitch\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228318 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtsdd\" (UniqueName: 
\"kubernetes.io/projected/37aa7967-25e0-4f97-b792-65ace9fdfd37-kube-api-access-wtsdd\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228336 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-kubelet\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228350 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/37aa7967-25e0-4f97-b792-65ace9fdfd37-ovnkube-config\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228380 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-slash\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228404 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-run-ovn\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228429 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-etc-openvswitch\") pod 
\"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228449 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-cni-bin\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228466 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/37aa7967-25e0-4f97-b792-65ace9fdfd37-ovnkube-script-lib\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228481 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-var-lib-openvswitch\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228498 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-run-ovn-kubernetes\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228528 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-run-systemd\") pod \"ovnkube-node-pmcdw\" (UID: 
\"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228547 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-log-socket\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228562 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-run-netns\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228582 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-node-log\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228601 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228637 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/37aa7967-25e0-4f97-b792-65ace9fdfd37-env-overrides\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228659 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-cni-netd\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228728 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-cni-netd\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.228767 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-systemd-units\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.230898 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/37aa7967-25e0-4f97-b792-65ace9fdfd37-ovnkube-script-lib\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.230915 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-log-socket\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.230956 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-run-openvswitch\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.230986 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-var-lib-openvswitch\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.231017 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-run-ovn-kubernetes\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.231043 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-run-systemd\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.231067 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-slash\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.231086 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-kubelet\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.231226 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-run-ovn\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.231269 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-etc-openvswitch\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.231297 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-cni-bin\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.231327 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-node-log\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.231360 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-run-netns\") pod \"ovnkube-node-pmcdw\" (UID: 
\"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.231388 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/37aa7967-25e0-4f97-b792-65ace9fdfd37-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.231662 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/37aa7967-25e0-4f97-b792-65ace9fdfd37-ovnkube-config\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.231898 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/37aa7967-25e0-4f97-b792-65ace9fdfd37-ovn-node-metrics-cert\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.232203 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/37aa7967-25e0-4f97-b792-65ace9fdfd37-env-overrides\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.260198 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtsdd\" (UniqueName: \"kubernetes.io/projected/37aa7967-25e0-4f97-b792-65ace9fdfd37-kube-api-access-wtsdd\") pod \"ovnkube-node-pmcdw\" (UID: \"37aa7967-25e0-4f97-b792-65ace9fdfd37\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.270841 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rpr29_b021ff26-58a3-4418-b6ba-4aa8e0bb6746/ovn-acl-logging/0.log" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.271406 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rpr29_b021ff26-58a3-4418-b6ba-4aa8e0bb6746/ovn-controller/0.log" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.272072 4728 generic.go:334] "Generic (PLEG): container finished" podID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerID="97ccafee960cda236a6c78d9ac91eb112c587cbb3a09b3f3fe63fd6a169527df" exitCode=0 Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.272101 4728 generic.go:334] "Generic (PLEG): container finished" podID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" containerID="226b2fb5a272504808ea376efa8cfc95a1089240a4d41df99b2a13c37e97dabd" exitCode=0 Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.272178 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerDied","Data":"97ccafee960cda236a6c78d9ac91eb112c587cbb3a09b3f3fe63fd6a169527df"} Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.272262 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerDied","Data":"226b2fb5a272504808ea376efa8cfc95a1089240a4d41df99b2a13c37e97dabd"} Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.272293 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" event={"ID":"b021ff26-58a3-4418-b6ba-4aa8e0bb6746","Type":"ContainerDied","Data":"3a8e0b7cc10e78d8d955c50238a1305518903d59220fad8a73f9f277d9a4a66b"} Feb 27 10:37:24 crc 
kubenswrapper[4728]: I0227 10:37:24.272301 4728 scope.go:117] "RemoveContainer" containerID="1918119473c9678a4002db8e1aaf8917697032ea10fb68d07119849b6c95058d" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.272567 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rpr29" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.274708 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9tlth_468912b7-185a-4869-9a65-70cbcb3c4fb1/kube-multus/0.log" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.274759 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9tlth" event={"ID":"468912b7-185a-4869-9a65-70cbcb3c4fb1","Type":"ContainerStarted","Data":"d4cd8b2a8f50323ef597fc84e0778180bb66dbb1f82b9b23e4b4b551f4dfa188"} Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.299993 4728 scope.go:117] "RemoveContainer" containerID="03ade0507ce072ec69afdcbe9da276564bbe5614b51ff1367e19f01c8e2e64fb" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.319398 4728 scope.go:117] "RemoveContainer" containerID="b09c7887821795cd7de5864ce224a276f25faa70f181b308de3f418f7b1a241f" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.348217 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rpr29"] Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.372695 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rpr29"] Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.372717 4728 scope.go:117] "RemoveContainer" containerID="2bfe2d245cfb745fa96a831c2aedf5cc232e00a62b801485743d7da8e9ae6406" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.395146 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.410670 4728 scope.go:117] "RemoveContainer" containerID="97ccafee960cda236a6c78d9ac91eb112c587cbb3a09b3f3fe63fd6a169527df" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.486947 4728 scope.go:117] "RemoveContainer" containerID="226b2fb5a272504808ea376efa8cfc95a1089240a4d41df99b2a13c37e97dabd" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.506167 4728 scope.go:117] "RemoveContainer" containerID="a2be6adb37ecb98705469e4fb97ada883e4ceb5c205af3e70b54b1a6f88b0a83" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.526837 4728 scope.go:117] "RemoveContainer" containerID="6fe524630c9ea646073ef4c97fedec7a4605b268bff442198ad57cf07a033f1e" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.541389 4728 scope.go:117] "RemoveContainer" containerID="4fae001e6877f3c7fb93e4b04bc24d137dc9dc274d4acac247fb84bd9c32d80d" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.554922 4728 scope.go:117] "RemoveContainer" containerID="1918119473c9678a4002db8e1aaf8917697032ea10fb68d07119849b6c95058d" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.555429 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1918119473c9678a4002db8e1aaf8917697032ea10fb68d07119849b6c95058d\": container with ID starting with 1918119473c9678a4002db8e1aaf8917697032ea10fb68d07119849b6c95058d not found: ID does not exist" containerID="1918119473c9678a4002db8e1aaf8917697032ea10fb68d07119849b6c95058d" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.555484 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1918119473c9678a4002db8e1aaf8917697032ea10fb68d07119849b6c95058d"} err="failed to get container status \"1918119473c9678a4002db8e1aaf8917697032ea10fb68d07119849b6c95058d\": rpc error: code = NotFound desc = could not find 
container \"1918119473c9678a4002db8e1aaf8917697032ea10fb68d07119849b6c95058d\": container with ID starting with 1918119473c9678a4002db8e1aaf8917697032ea10fb68d07119849b6c95058d not found: ID does not exist" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.555535 4728 scope.go:117] "RemoveContainer" containerID="03ade0507ce072ec69afdcbe9da276564bbe5614b51ff1367e19f01c8e2e64fb" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.555846 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03ade0507ce072ec69afdcbe9da276564bbe5614b51ff1367e19f01c8e2e64fb\": container with ID starting with 03ade0507ce072ec69afdcbe9da276564bbe5614b51ff1367e19f01c8e2e64fb not found: ID does not exist" containerID="03ade0507ce072ec69afdcbe9da276564bbe5614b51ff1367e19f01c8e2e64fb" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.555872 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03ade0507ce072ec69afdcbe9da276564bbe5614b51ff1367e19f01c8e2e64fb"} err="failed to get container status \"03ade0507ce072ec69afdcbe9da276564bbe5614b51ff1367e19f01c8e2e64fb\": rpc error: code = NotFound desc = could not find container \"03ade0507ce072ec69afdcbe9da276564bbe5614b51ff1367e19f01c8e2e64fb\": container with ID starting with 03ade0507ce072ec69afdcbe9da276564bbe5614b51ff1367e19f01c8e2e64fb not found: ID does not exist" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.555887 4728 scope.go:117] "RemoveContainer" containerID="b09c7887821795cd7de5864ce224a276f25faa70f181b308de3f418f7b1a241f" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.556290 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b09c7887821795cd7de5864ce224a276f25faa70f181b308de3f418f7b1a241f\": container with ID starting with b09c7887821795cd7de5864ce224a276f25faa70f181b308de3f418f7b1a241f not found: ID does 
not exist" containerID="b09c7887821795cd7de5864ce224a276f25faa70f181b308de3f418f7b1a241f" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.556332 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b09c7887821795cd7de5864ce224a276f25faa70f181b308de3f418f7b1a241f"} err="failed to get container status \"b09c7887821795cd7de5864ce224a276f25faa70f181b308de3f418f7b1a241f\": rpc error: code = NotFound desc = could not find container \"b09c7887821795cd7de5864ce224a276f25faa70f181b308de3f418f7b1a241f\": container with ID starting with b09c7887821795cd7de5864ce224a276f25faa70f181b308de3f418f7b1a241f not found: ID does not exist" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.556368 4728 scope.go:117] "RemoveContainer" containerID="2bfe2d245cfb745fa96a831c2aedf5cc232e00a62b801485743d7da8e9ae6406" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.556712 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bfe2d245cfb745fa96a831c2aedf5cc232e00a62b801485743d7da8e9ae6406\": container with ID starting with 2bfe2d245cfb745fa96a831c2aedf5cc232e00a62b801485743d7da8e9ae6406 not found: ID does not exist" containerID="2bfe2d245cfb745fa96a831c2aedf5cc232e00a62b801485743d7da8e9ae6406" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.556742 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bfe2d245cfb745fa96a831c2aedf5cc232e00a62b801485743d7da8e9ae6406"} err="failed to get container status \"2bfe2d245cfb745fa96a831c2aedf5cc232e00a62b801485743d7da8e9ae6406\": rpc error: code = NotFound desc = could not find container \"2bfe2d245cfb745fa96a831c2aedf5cc232e00a62b801485743d7da8e9ae6406\": container with ID starting with 2bfe2d245cfb745fa96a831c2aedf5cc232e00a62b801485743d7da8e9ae6406 not found: ID does not exist" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.556765 4728 
scope.go:117] "RemoveContainer" containerID="97ccafee960cda236a6c78d9ac91eb112c587cbb3a09b3f3fe63fd6a169527df" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.557184 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97ccafee960cda236a6c78d9ac91eb112c587cbb3a09b3f3fe63fd6a169527df\": container with ID starting with 97ccafee960cda236a6c78d9ac91eb112c587cbb3a09b3f3fe63fd6a169527df not found: ID does not exist" containerID="97ccafee960cda236a6c78d9ac91eb112c587cbb3a09b3f3fe63fd6a169527df" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.557231 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ccafee960cda236a6c78d9ac91eb112c587cbb3a09b3f3fe63fd6a169527df"} err="failed to get container status \"97ccafee960cda236a6c78d9ac91eb112c587cbb3a09b3f3fe63fd6a169527df\": rpc error: code = NotFound desc = could not find container \"97ccafee960cda236a6c78d9ac91eb112c587cbb3a09b3f3fe63fd6a169527df\": container with ID starting with 97ccafee960cda236a6c78d9ac91eb112c587cbb3a09b3f3fe63fd6a169527df not found: ID does not exist" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.557258 4728 scope.go:117] "RemoveContainer" containerID="226b2fb5a272504808ea376efa8cfc95a1089240a4d41df99b2a13c37e97dabd" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.557556 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"226b2fb5a272504808ea376efa8cfc95a1089240a4d41df99b2a13c37e97dabd\": container with ID starting with 226b2fb5a272504808ea376efa8cfc95a1089240a4d41df99b2a13c37e97dabd not found: ID does not exist" containerID="226b2fb5a272504808ea376efa8cfc95a1089240a4d41df99b2a13c37e97dabd" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.557588 4728 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"226b2fb5a272504808ea376efa8cfc95a1089240a4d41df99b2a13c37e97dabd"} err="failed to get container status \"226b2fb5a272504808ea376efa8cfc95a1089240a4d41df99b2a13c37e97dabd\": rpc error: code = NotFound desc = could not find container \"226b2fb5a272504808ea376efa8cfc95a1089240a4d41df99b2a13c37e97dabd\": container with ID starting with 226b2fb5a272504808ea376efa8cfc95a1089240a4d41df99b2a13c37e97dabd not found: ID does not exist" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.557610 4728 scope.go:117] "RemoveContainer" containerID="a2be6adb37ecb98705469e4fb97ada883e4ceb5c205af3e70b54b1a6f88b0a83" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.558080 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2be6adb37ecb98705469e4fb97ada883e4ceb5c205af3e70b54b1a6f88b0a83\": container with ID starting with a2be6adb37ecb98705469e4fb97ada883e4ceb5c205af3e70b54b1a6f88b0a83 not found: ID does not exist" containerID="a2be6adb37ecb98705469e4fb97ada883e4ceb5c205af3e70b54b1a6f88b0a83" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.558110 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2be6adb37ecb98705469e4fb97ada883e4ceb5c205af3e70b54b1a6f88b0a83"} err="failed to get container status \"a2be6adb37ecb98705469e4fb97ada883e4ceb5c205af3e70b54b1a6f88b0a83\": rpc error: code = NotFound desc = could not find container \"a2be6adb37ecb98705469e4fb97ada883e4ceb5c205af3e70b54b1a6f88b0a83\": container with ID starting with a2be6adb37ecb98705469e4fb97ada883e4ceb5c205af3e70b54b1a6f88b0a83 not found: ID does not exist" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.558131 4728 scope.go:117] "RemoveContainer" containerID="6fe524630c9ea646073ef4c97fedec7a4605b268bff442198ad57cf07a033f1e" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.558428 4728 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6fe524630c9ea646073ef4c97fedec7a4605b268bff442198ad57cf07a033f1e\": container with ID starting with 6fe524630c9ea646073ef4c97fedec7a4605b268bff442198ad57cf07a033f1e not found: ID does not exist" containerID="6fe524630c9ea646073ef4c97fedec7a4605b268bff442198ad57cf07a033f1e" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.558463 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe524630c9ea646073ef4c97fedec7a4605b268bff442198ad57cf07a033f1e"} err="failed to get container status \"6fe524630c9ea646073ef4c97fedec7a4605b268bff442198ad57cf07a033f1e\": rpc error: code = NotFound desc = could not find container \"6fe524630c9ea646073ef4c97fedec7a4605b268bff442198ad57cf07a033f1e\": container with ID starting with 6fe524630c9ea646073ef4c97fedec7a4605b268bff442198ad57cf07a033f1e not found: ID does not exist" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.558479 4728 scope.go:117] "RemoveContainer" containerID="4fae001e6877f3c7fb93e4b04bc24d137dc9dc274d4acac247fb84bd9c32d80d" Feb 27 10:37:24 crc kubenswrapper[4728]: E0227 10:37:24.558806 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fae001e6877f3c7fb93e4b04bc24d137dc9dc274d4acac247fb84bd9c32d80d\": container with ID starting with 4fae001e6877f3c7fb93e4b04bc24d137dc9dc274d4acac247fb84bd9c32d80d not found: ID does not exist" containerID="4fae001e6877f3c7fb93e4b04bc24d137dc9dc274d4acac247fb84bd9c32d80d" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.558839 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fae001e6877f3c7fb93e4b04bc24d137dc9dc274d4acac247fb84bd9c32d80d"} err="failed to get container status \"4fae001e6877f3c7fb93e4b04bc24d137dc9dc274d4acac247fb84bd9c32d80d\": rpc error: code = NotFound desc = could not find container 
\"4fae001e6877f3c7fb93e4b04bc24d137dc9dc274d4acac247fb84bd9c32d80d\": container with ID starting with 4fae001e6877f3c7fb93e4b04bc24d137dc9dc274d4acac247fb84bd9c32d80d not found: ID does not exist" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.558860 4728 scope.go:117] "RemoveContainer" containerID="1918119473c9678a4002db8e1aaf8917697032ea10fb68d07119849b6c95058d" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.559143 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1918119473c9678a4002db8e1aaf8917697032ea10fb68d07119849b6c95058d"} err="failed to get container status \"1918119473c9678a4002db8e1aaf8917697032ea10fb68d07119849b6c95058d\": rpc error: code = NotFound desc = could not find container \"1918119473c9678a4002db8e1aaf8917697032ea10fb68d07119849b6c95058d\": container with ID starting with 1918119473c9678a4002db8e1aaf8917697032ea10fb68d07119849b6c95058d not found: ID does not exist" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.559165 4728 scope.go:117] "RemoveContainer" containerID="03ade0507ce072ec69afdcbe9da276564bbe5614b51ff1367e19f01c8e2e64fb" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.559429 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03ade0507ce072ec69afdcbe9da276564bbe5614b51ff1367e19f01c8e2e64fb"} err="failed to get container status \"03ade0507ce072ec69afdcbe9da276564bbe5614b51ff1367e19f01c8e2e64fb\": rpc error: code = NotFound desc = could not find container \"03ade0507ce072ec69afdcbe9da276564bbe5614b51ff1367e19f01c8e2e64fb\": container with ID starting with 03ade0507ce072ec69afdcbe9da276564bbe5614b51ff1367e19f01c8e2e64fb not found: ID does not exist" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.559454 4728 scope.go:117] "RemoveContainer" containerID="b09c7887821795cd7de5864ce224a276f25faa70f181b308de3f418f7b1a241f" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.559733 4728 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b09c7887821795cd7de5864ce224a276f25faa70f181b308de3f418f7b1a241f"} err="failed to get container status \"b09c7887821795cd7de5864ce224a276f25faa70f181b308de3f418f7b1a241f\": rpc error: code = NotFound desc = could not find container \"b09c7887821795cd7de5864ce224a276f25faa70f181b308de3f418f7b1a241f\": container with ID starting with b09c7887821795cd7de5864ce224a276f25faa70f181b308de3f418f7b1a241f not found: ID does not exist" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.559757 4728 scope.go:117] "RemoveContainer" containerID="2bfe2d245cfb745fa96a831c2aedf5cc232e00a62b801485743d7da8e9ae6406" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.560081 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bfe2d245cfb745fa96a831c2aedf5cc232e00a62b801485743d7da8e9ae6406"} err="failed to get container status \"2bfe2d245cfb745fa96a831c2aedf5cc232e00a62b801485743d7da8e9ae6406\": rpc error: code = NotFound desc = could not find container \"2bfe2d245cfb745fa96a831c2aedf5cc232e00a62b801485743d7da8e9ae6406\": container with ID starting with 2bfe2d245cfb745fa96a831c2aedf5cc232e00a62b801485743d7da8e9ae6406 not found: ID does not exist" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.560101 4728 scope.go:117] "RemoveContainer" containerID="97ccafee960cda236a6c78d9ac91eb112c587cbb3a09b3f3fe63fd6a169527df" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.560455 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ccafee960cda236a6c78d9ac91eb112c587cbb3a09b3f3fe63fd6a169527df"} err="failed to get container status \"97ccafee960cda236a6c78d9ac91eb112c587cbb3a09b3f3fe63fd6a169527df\": rpc error: code = NotFound desc = could not find container \"97ccafee960cda236a6c78d9ac91eb112c587cbb3a09b3f3fe63fd6a169527df\": container with ID starting with 
97ccafee960cda236a6c78d9ac91eb112c587cbb3a09b3f3fe63fd6a169527df not found: ID does not exist" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.560484 4728 scope.go:117] "RemoveContainer" containerID="226b2fb5a272504808ea376efa8cfc95a1089240a4d41df99b2a13c37e97dabd" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.560841 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"226b2fb5a272504808ea376efa8cfc95a1089240a4d41df99b2a13c37e97dabd"} err="failed to get container status \"226b2fb5a272504808ea376efa8cfc95a1089240a4d41df99b2a13c37e97dabd\": rpc error: code = NotFound desc = could not find container \"226b2fb5a272504808ea376efa8cfc95a1089240a4d41df99b2a13c37e97dabd\": container with ID starting with 226b2fb5a272504808ea376efa8cfc95a1089240a4d41df99b2a13c37e97dabd not found: ID does not exist" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.560860 4728 scope.go:117] "RemoveContainer" containerID="a2be6adb37ecb98705469e4fb97ada883e4ceb5c205af3e70b54b1a6f88b0a83" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.561178 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2be6adb37ecb98705469e4fb97ada883e4ceb5c205af3e70b54b1a6f88b0a83"} err="failed to get container status \"a2be6adb37ecb98705469e4fb97ada883e4ceb5c205af3e70b54b1a6f88b0a83\": rpc error: code = NotFound desc = could not find container \"a2be6adb37ecb98705469e4fb97ada883e4ceb5c205af3e70b54b1a6f88b0a83\": container with ID starting with a2be6adb37ecb98705469e4fb97ada883e4ceb5c205af3e70b54b1a6f88b0a83 not found: ID does not exist" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.561202 4728 scope.go:117] "RemoveContainer" containerID="6fe524630c9ea646073ef4c97fedec7a4605b268bff442198ad57cf07a033f1e" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.561583 4728 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6fe524630c9ea646073ef4c97fedec7a4605b268bff442198ad57cf07a033f1e"} err="failed to get container status \"6fe524630c9ea646073ef4c97fedec7a4605b268bff442198ad57cf07a033f1e\": rpc error: code = NotFound desc = could not find container \"6fe524630c9ea646073ef4c97fedec7a4605b268bff442198ad57cf07a033f1e\": container with ID starting with 6fe524630c9ea646073ef4c97fedec7a4605b268bff442198ad57cf07a033f1e not found: ID does not exist" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.561602 4728 scope.go:117] "RemoveContainer" containerID="4fae001e6877f3c7fb93e4b04bc24d137dc9dc274d4acac247fb84bd9c32d80d" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.562012 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fae001e6877f3c7fb93e4b04bc24d137dc9dc274d4acac247fb84bd9c32d80d"} err="failed to get container status \"4fae001e6877f3c7fb93e4b04bc24d137dc9dc274d4acac247fb84bd9c32d80d\": rpc error: code = NotFound desc = could not find container \"4fae001e6877f3c7fb93e4b04bc24d137dc9dc274d4acac247fb84bd9c32d80d\": container with ID starting with 4fae001e6877f3c7fb93e4b04bc24d137dc9dc274d4acac247fb84bd9c32d80d not found: ID does not exist" Feb 27 10:37:24 crc kubenswrapper[4728]: I0227 10:37:24.731763 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b021ff26-58a3-4418-b6ba-4aa8e0bb6746" path="/var/lib/kubelet/pods/b021ff26-58a3-4418-b6ba-4aa8e0bb6746/volumes" Feb 27 10:37:25 crc kubenswrapper[4728]: I0227 10:37:25.285849 4728 generic.go:334] "Generic (PLEG): container finished" podID="37aa7967-25e0-4f97-b792-65ace9fdfd37" containerID="26cf78ad81af2b4820677e78be4bc2ddd44a388a1e4d377d54c99fe07c56a805" exitCode=0 Feb 27 10:37:25 crc kubenswrapper[4728]: I0227 10:37:25.285960 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" 
event={"ID":"37aa7967-25e0-4f97-b792-65ace9fdfd37","Type":"ContainerDied","Data":"26cf78ad81af2b4820677e78be4bc2ddd44a388a1e4d377d54c99fe07c56a805"} Feb 27 10:37:25 crc kubenswrapper[4728]: I0227 10:37:25.286250 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" event={"ID":"37aa7967-25e0-4f97-b792-65ace9fdfd37","Type":"ContainerStarted","Data":"973f2d870915b6b4b46930ffcbd81876197f8b08f6736339e5e5aab39a2ab00c"} Feb 27 10:37:26 crc kubenswrapper[4728]: I0227 10:37:26.339221 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" event={"ID":"37aa7967-25e0-4f97-b792-65ace9fdfd37","Type":"ContainerStarted","Data":"99707f4eb8b3b4726e72272fe30a3f48a5a94e219ff541913909a343ae7b4a20"} Feb 27 10:37:26 crc kubenswrapper[4728]: I0227 10:37:26.339585 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" event={"ID":"37aa7967-25e0-4f97-b792-65ace9fdfd37","Type":"ContainerStarted","Data":"60a3e4dcab261790f7b579e6bfc67d378c4dfa3bf3ee791ae85b68a306d0e4c0"} Feb 27 10:37:26 crc kubenswrapper[4728]: I0227 10:37:26.339602 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" event={"ID":"37aa7967-25e0-4f97-b792-65ace9fdfd37","Type":"ContainerStarted","Data":"ef6f05f126d8d6c34e9ec22f3be4889fa92e02fe82e025a7e81204f4d9e0c772"} Feb 27 10:37:26 crc kubenswrapper[4728]: I0227 10:37:26.339614 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" event={"ID":"37aa7967-25e0-4f97-b792-65ace9fdfd37","Type":"ContainerStarted","Data":"95027e1b306bb5b8d3b2c6df0f31e88732a170fd15a9fa584a31bb4aac47cf6e"} Feb 27 10:37:26 crc kubenswrapper[4728]: I0227 10:37:26.339625 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" 
event={"ID":"37aa7967-25e0-4f97-b792-65ace9fdfd37","Type":"ContainerStarted","Data":"98865667007fc3e29309717cd1a2e164889cd08ea675cfb3bd1921e4659cc3fb"} Feb 27 10:37:26 crc kubenswrapper[4728]: I0227 10:37:26.339635 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" event={"ID":"37aa7967-25e0-4f97-b792-65ace9fdfd37","Type":"ContainerStarted","Data":"caa9f24cf17499a2c6ae30b0f710628e09a94eeb9e9e88d6f043cb541ed589ed"} Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.138843 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp"] Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.140076 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.141904 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.142199 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-v2dlf" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.145686 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.190280 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg"] Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.191008 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.194052 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-gjbtl" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.194208 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.197770 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt"] Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.198945 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.210993 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjjdd\" (UniqueName: \"kubernetes.io/projected/936f8a2d-37fa-4d39-9de8-a07aa8efaf6a-kube-api-access-cjjdd\") pod \"obo-prometheus-operator-68bc856cb9-zzdsp\" (UID: \"936f8a2d-37fa-4d39-9de8-a07aa8efaf6a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.312003 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt\" (UID: \"18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.312076 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg\" (UID: \"c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.312110 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjjdd\" (UniqueName: \"kubernetes.io/projected/936f8a2d-37fa-4d39-9de8-a07aa8efaf6a-kube-api-access-cjjdd\") pod \"obo-prometheus-operator-68bc856cb9-zzdsp\" (UID: \"936f8a2d-37fa-4d39-9de8-a07aa8efaf6a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.312178 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg\" (UID: \"c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.312214 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt\" (UID: \"18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.337442 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjjdd\" (UniqueName: 
\"kubernetes.io/projected/936f8a2d-37fa-4d39-9de8-a07aa8efaf6a-kube-api-access-cjjdd\") pod \"obo-prometheus-operator-68bc856cb9-zzdsp\" (UID: \"936f8a2d-37fa-4d39-9de8-a07aa8efaf6a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.358561 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" event={"ID":"37aa7967-25e0-4f97-b792-65ace9fdfd37","Type":"ContainerStarted","Data":"90395e552acd57f51de9da58a45ce7e6e8c779ef651588edd1e2cd619acaad5d"} Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.373824 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-n82qn"] Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.374558 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-n82qn" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.376228 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.376360 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-rv478" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.413199 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt\" (UID: \"18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.413247 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt\" (UID: \"18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.413282 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/685b65a4-9d96-4018-b8df-a45eccc1e923-observability-operator-tls\") pod \"observability-operator-59bdc8b94-n82qn\" (UID: \"685b65a4-9d96-4018-b8df-a45eccc1e923\") " pod="openshift-operators/observability-operator-59bdc8b94-n82qn" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.413302 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd9sd\" (UniqueName: \"kubernetes.io/projected/685b65a4-9d96-4018-b8df-a45eccc1e923-kube-api-access-kd9sd\") pod \"observability-operator-59bdc8b94-n82qn\" (UID: \"685b65a4-9d96-4018-b8df-a45eccc1e923\") " pod="openshift-operators/observability-operator-59bdc8b94-n82qn" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.413339 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg\" (UID: \"c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.413696 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg\" (UID: 
\"c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.416703 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt\" (UID: \"18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.416880 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg\" (UID: \"c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.417104 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt\" (UID: \"18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.417306 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg\" (UID: \"c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.455120 4728 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.456685 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-2bd92"] Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.457531 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2bd92" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.462441 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-gcg5m" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.493671 4728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zzdsp_openshift-operators_936f8a2d-37fa-4d39-9de8-a07aa8efaf6a_0(0a8e49e89a14a0050b80f157520763a267e973b4f40eeb7f4b0ce5b261e338bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.493747 4728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zzdsp_openshift-operators_936f8a2d-37fa-4d39-9de8-a07aa8efaf6a_0(0a8e49e89a14a0050b80f157520763a267e973b4f40eeb7f4b0ce5b261e338bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.493777 4728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zzdsp_openshift-operators_936f8a2d-37fa-4d39-9de8-a07aa8efaf6a_0(0a8e49e89a14a0050b80f157520763a267e973b4f40eeb7f4b0ce5b261e338bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.493830 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-zzdsp_openshift-operators(936f8a2d-37fa-4d39-9de8-a07aa8efaf6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-zzdsp_openshift-operators(936f8a2d-37fa-4d39-9de8-a07aa8efaf6a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zzdsp_openshift-operators_936f8a2d-37fa-4d39-9de8-a07aa8efaf6a_0(0a8e49e89a14a0050b80f157520763a267e973b4f40eeb7f4b0ce5b261e338bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp" podUID="936f8a2d-37fa-4d39-9de8-a07aa8efaf6a" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.503613 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.513828 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.514849 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/52f3fd68-b1f1-4b15-b15c-5356d08aeedd-openshift-service-ca\") pod \"perses-operator-5bf474d74f-2bd92\" (UID: \"52f3fd68-b1f1-4b15-b15c-5356d08aeedd\") " pod="openshift-operators/perses-operator-5bf474d74f-2bd92" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.514891 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5bdx\" (UniqueName: \"kubernetes.io/projected/52f3fd68-b1f1-4b15-b15c-5356d08aeedd-kube-api-access-h5bdx\") pod \"perses-operator-5bf474d74f-2bd92\" (UID: \"52f3fd68-b1f1-4b15-b15c-5356d08aeedd\") " pod="openshift-operators/perses-operator-5bf474d74f-2bd92" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.514968 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/685b65a4-9d96-4018-b8df-a45eccc1e923-observability-operator-tls\") pod \"observability-operator-59bdc8b94-n82qn\" (UID: \"685b65a4-9d96-4018-b8df-a45eccc1e923\") " pod="openshift-operators/observability-operator-59bdc8b94-n82qn" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.514995 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd9sd\" (UniqueName: \"kubernetes.io/projected/685b65a4-9d96-4018-b8df-a45eccc1e923-kube-api-access-kd9sd\") pod \"observability-operator-59bdc8b94-n82qn\" (UID: \"685b65a4-9d96-4018-b8df-a45eccc1e923\") " pod="openshift-operators/observability-operator-59bdc8b94-n82qn" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.525054 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/685b65a4-9d96-4018-b8df-a45eccc1e923-observability-operator-tls\") pod \"observability-operator-59bdc8b94-n82qn\" (UID: \"685b65a4-9d96-4018-b8df-a45eccc1e923\") " pod="openshift-operators/observability-operator-59bdc8b94-n82qn" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.533783 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd9sd\" (UniqueName: \"kubernetes.io/projected/685b65a4-9d96-4018-b8df-a45eccc1e923-kube-api-access-kd9sd\") pod \"observability-operator-59bdc8b94-n82qn\" (UID: \"685b65a4-9d96-4018-b8df-a45eccc1e923\") " pod="openshift-operators/observability-operator-59bdc8b94-n82qn" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.546704 4728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg_openshift-operators_c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb_0(ee8727542b3ec8fe44e0b0c9ce87b802d85be49223b3d7e996e350114ad0d12f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.546777 4728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg_openshift-operators_c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb_0(ee8727542b3ec8fe44e0b0c9ce87b802d85be49223b3d7e996e350114ad0d12f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.546813 4728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg_openshift-operators_c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb_0(ee8727542b3ec8fe44e0b0c9ce87b802d85be49223b3d7e996e350114ad0d12f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.546884 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg_openshift-operators(c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg_openshift-operators(c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg_openshift-operators_c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb_0(ee8727542b3ec8fe44e0b0c9ce87b802d85be49223b3d7e996e350114ad0d12f): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" podUID="c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.549036 4728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt_openshift-operators_18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82_0(f2c08623e2828afc24c74dd36a2a8cd5a242e4f9379089afa0cf219540e60981): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.549087 4728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt_openshift-operators_18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82_0(f2c08623e2828afc24c74dd36a2a8cd5a242e4f9379089afa0cf219540e60981): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.549109 4728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt_openshift-operators_18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82_0(f2c08623e2828afc24c74dd36a2a8cd5a242e4f9379089afa0cf219540e60981): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.549157 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt_openshift-operators(18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt_openshift-operators(18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt_openshift-operators_18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82_0(f2c08623e2828afc24c74dd36a2a8cd5a242e4f9379089afa0cf219540e60981): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" podUID="18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.615759 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/52f3fd68-b1f1-4b15-b15c-5356d08aeedd-openshift-service-ca\") pod \"perses-operator-5bf474d74f-2bd92\" (UID: \"52f3fd68-b1f1-4b15-b15c-5356d08aeedd\") " pod="openshift-operators/perses-operator-5bf474d74f-2bd92" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.615806 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5bdx\" (UniqueName: \"kubernetes.io/projected/52f3fd68-b1f1-4b15-b15c-5356d08aeedd-kube-api-access-h5bdx\") pod \"perses-operator-5bf474d74f-2bd92\" (UID: \"52f3fd68-b1f1-4b15-b15c-5356d08aeedd\") " pod="openshift-operators/perses-operator-5bf474d74f-2bd92" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.616959 4728 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/52f3fd68-b1f1-4b15-b15c-5356d08aeedd-openshift-service-ca\") pod \"perses-operator-5bf474d74f-2bd92\" (UID: \"52f3fd68-b1f1-4b15-b15c-5356d08aeedd\") " pod="openshift-operators/perses-operator-5bf474d74f-2bd92" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.643389 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5bdx\" (UniqueName: \"kubernetes.io/projected/52f3fd68-b1f1-4b15-b15c-5356d08aeedd-kube-api-access-h5bdx\") pod \"perses-operator-5bf474d74f-2bd92\" (UID: \"52f3fd68-b1f1-4b15-b15c-5356d08aeedd\") " pod="openshift-operators/perses-operator-5bf474d74f-2bd92" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.688107 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-n82qn" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.707396 4728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-n82qn_openshift-operators_685b65a4-9d96-4018-b8df-a45eccc1e923_0(f0d455515d933fb0a33e9757c3b47c6c213af58ca900f379c43089a531e158fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.707490 4728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-n82qn_openshift-operators_685b65a4-9d96-4018-b8df-a45eccc1e923_0(f0d455515d933fb0a33e9757c3b47c6c213af58ca900f379c43089a531e158fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-n82qn" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.707545 4728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-n82qn_openshift-operators_685b65a4-9d96-4018-b8df-a45eccc1e923_0(f0d455515d933fb0a33e9757c3b47c6c213af58ca900f379c43089a531e158fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-n82qn" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.707616 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-n82qn_openshift-operators(685b65a4-9d96-4018-b8df-a45eccc1e923)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-n82qn_openshift-operators(685b65a4-9d96-4018-b8df-a45eccc1e923)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-n82qn_openshift-operators_685b65a4-9d96-4018-b8df-a45eccc1e923_0(f0d455515d933fb0a33e9757c3b47c6c213af58ca900f379c43089a531e158fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-n82qn" podUID="685b65a4-9d96-4018-b8df-a45eccc1e923" Feb 27 10:37:29 crc kubenswrapper[4728]: I0227 10:37:29.815457 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2bd92" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.838894 4728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2bd92_openshift-operators_52f3fd68-b1f1-4b15-b15c-5356d08aeedd_0(0d852096045419d6956aa4830acd13cf0464332a214cbdfd8f9349c36205f952): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.839203 4728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2bd92_openshift-operators_52f3fd68-b1f1-4b15-b15c-5356d08aeedd_0(0d852096045419d6956aa4830acd13cf0464332a214cbdfd8f9349c36205f952): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-2bd92" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.839243 4728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2bd92_openshift-operators_52f3fd68-b1f1-4b15-b15c-5356d08aeedd_0(0d852096045419d6956aa4830acd13cf0464332a214cbdfd8f9349c36205f952): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-2bd92" Feb 27 10:37:29 crc kubenswrapper[4728]: E0227 10:37:29.839304 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-2bd92_openshift-operators(52f3fd68-b1f1-4b15-b15c-5356d08aeedd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-2bd92_openshift-operators(52f3fd68-b1f1-4b15-b15c-5356d08aeedd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2bd92_openshift-operators_52f3fd68-b1f1-4b15-b15c-5356d08aeedd_0(0d852096045419d6956aa4830acd13cf0464332a214cbdfd8f9349c36205f952): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-2bd92" podUID="52f3fd68-b1f1-4b15-b15c-5356d08aeedd" Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.382134 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" event={"ID":"37aa7967-25e0-4f97-b792-65ace9fdfd37","Type":"ContainerStarted","Data":"d08a0be65ceae2e1db2927270b7f0d7165dfd35b46be11c503d9893d08db7ac1"} Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.382511 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.382543 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.382553 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.428132 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" podStartSLOduration=7.428114386 
podStartE2EDuration="7.428114386s" podCreationTimestamp="2026-02-27 10:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:37:31.424347115 +0000 UTC m=+671.386713221" watchObservedRunningTime="2026-02-27 10:37:31.428114386 +0000 UTC m=+671.390480492" Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.475128 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.476923 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.946236 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp"] Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.946355 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp" Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.946777 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp" Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.951658 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-2bd92"] Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.951776 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2bd92" Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.952142 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2bd92" Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.955750 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg"] Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.955965 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.956458 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.966689 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt"] Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.966811 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.967231 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.979562 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-n82qn"] Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.979738 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-n82qn" Feb 27 10:37:31 crc kubenswrapper[4728]: I0227 10:37:31.980214 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-n82qn" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.002612 4728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zzdsp_openshift-operators_936f8a2d-37fa-4d39-9de8-a07aa8efaf6a_0(62fe5e538352f1e096fdec91a9a5928de9c14bfe3bf42178e1b4258c377c1176): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.002765 4728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zzdsp_openshift-operators_936f8a2d-37fa-4d39-9de8-a07aa8efaf6a_0(62fe5e538352f1e096fdec91a9a5928de9c14bfe3bf42178e1b4258c377c1176): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.002845 4728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zzdsp_openshift-operators_936f8a2d-37fa-4d39-9de8-a07aa8efaf6a_0(62fe5e538352f1e096fdec91a9a5928de9c14bfe3bf42178e1b4258c377c1176): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.002946 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-zzdsp_openshift-operators(936f8a2d-37fa-4d39-9de8-a07aa8efaf6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-zzdsp_openshift-operators(936f8a2d-37fa-4d39-9de8-a07aa8efaf6a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zzdsp_openshift-operators_936f8a2d-37fa-4d39-9de8-a07aa8efaf6a_0(62fe5e538352f1e096fdec91a9a5928de9c14bfe3bf42178e1b4258c377c1176): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp" podUID="936f8a2d-37fa-4d39-9de8-a07aa8efaf6a" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.013174 4728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2bd92_openshift-operators_52f3fd68-b1f1-4b15-b15c-5356d08aeedd_0(89e60cf1f614d1b2453ee88faca71158da13c3eeb10cacea477e4c5b891cdbad): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.013238 4728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2bd92_openshift-operators_52f3fd68-b1f1-4b15-b15c-5356d08aeedd_0(89e60cf1f614d1b2453ee88faca71158da13c3eeb10cacea477e4c5b891cdbad): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-2bd92" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.013259 4728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2bd92_openshift-operators_52f3fd68-b1f1-4b15-b15c-5356d08aeedd_0(89e60cf1f614d1b2453ee88faca71158da13c3eeb10cacea477e4c5b891cdbad): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-2bd92" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.013309 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-2bd92_openshift-operators(52f3fd68-b1f1-4b15-b15c-5356d08aeedd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-2bd92_openshift-operators(52f3fd68-b1f1-4b15-b15c-5356d08aeedd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2bd92_openshift-operators_52f3fd68-b1f1-4b15-b15c-5356d08aeedd_0(89e60cf1f614d1b2453ee88faca71158da13c3eeb10cacea477e4c5b891cdbad): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-2bd92" podUID="52f3fd68-b1f1-4b15-b15c-5356d08aeedd" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.034543 4728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg_openshift-operators_c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb_0(cb38d259bb9036f304b79d4a78ad227378228ace0182096c14081eaade0c0aed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.034629 4728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg_openshift-operators_c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb_0(cb38d259bb9036f304b79d4a78ad227378228ace0182096c14081eaade0c0aed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.034678 4728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg_openshift-operators_c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb_0(cb38d259bb9036f304b79d4a78ad227378228ace0182096c14081eaade0c0aed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.034757 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg_openshift-operators(c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg_openshift-operators(c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg_openshift-operators_c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb_0(cb38d259bb9036f304b79d4a78ad227378228ace0182096c14081eaade0c0aed): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" podUID="c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.042685 4728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt_openshift-operators_18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82_0(ea8fafcc6fb48b79ffd1cf49b0692bb5251f420257ff2749d5e753811b883d37): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.042751 4728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt_openshift-operators_18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82_0(ea8fafcc6fb48b79ffd1cf49b0692bb5251f420257ff2749d5e753811b883d37): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.042773 4728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt_openshift-operators_18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82_0(ea8fafcc6fb48b79ffd1cf49b0692bb5251f420257ff2749d5e753811b883d37): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.042822 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt_openshift-operators(18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt_openshift-operators(18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt_openshift-operators_18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82_0(ea8fafcc6fb48b79ffd1cf49b0692bb5251f420257ff2749d5e753811b883d37): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" podUID="18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.066642 4728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-n82qn_openshift-operators_685b65a4-9d96-4018-b8df-a45eccc1e923_0(2d15d0fcd6f6ea3d1e5c3163487734fec6f6963e53e43ddb9b66821fda22502e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.066706 4728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-n82qn_openshift-operators_685b65a4-9d96-4018-b8df-a45eccc1e923_0(2d15d0fcd6f6ea3d1e5c3163487734fec6f6963e53e43ddb9b66821fda22502e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-n82qn" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.066728 4728 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-n82qn_openshift-operators_685b65a4-9d96-4018-b8df-a45eccc1e923_0(2d15d0fcd6f6ea3d1e5c3163487734fec6f6963e53e43ddb9b66821fda22502e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-n82qn" Feb 27 10:37:32 crc kubenswrapper[4728]: E0227 10:37:32.066770 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-n82qn_openshift-operators(685b65a4-9d96-4018-b8df-a45eccc1e923)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-n82qn_openshift-operators(685b65a4-9d96-4018-b8df-a45eccc1e923)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-n82qn_openshift-operators_685b65a4-9d96-4018-b8df-a45eccc1e923_0(2d15d0fcd6f6ea3d1e5c3163487734fec6f6963e53e43ddb9b66821fda22502e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-n82qn" podUID="685b65a4-9d96-4018-b8df-a45eccc1e923" Feb 27 10:37:35 crc kubenswrapper[4728]: I0227 10:37:35.922225 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:37:35 crc kubenswrapper[4728]: I0227 10:37:35.922841 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:37:42 crc kubenswrapper[4728]: I0227 10:37:42.724151 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" Feb 27 10:37:42 crc kubenswrapper[4728]: I0227 10:37:42.724926 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" Feb 27 10:37:43 crc kubenswrapper[4728]: I0227 10:37:43.034930 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg"] Feb 27 10:37:43 crc kubenswrapper[4728]: I0227 10:37:43.453356 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" event={"ID":"c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb","Type":"ContainerStarted","Data":"646562b9ca4f72b93f3a91e8f4f265d61c8fa3568ea2c7d800efdf35c2f397ef"} Feb 27 10:37:43 crc kubenswrapper[4728]: I0227 10:37:43.724059 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp" Feb 27 10:37:43 crc kubenswrapper[4728]: I0227 10:37:43.724601 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp" Feb 27 10:37:43 crc kubenswrapper[4728]: I0227 10:37:43.997482 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp"] Feb 27 10:37:44 crc kubenswrapper[4728]: I0227 10:37:44.461440 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp" event={"ID":"936f8a2d-37fa-4d39-9de8-a07aa8efaf6a","Type":"ContainerStarted","Data":"0f252043ad59a6952c5154c7dd7ddfb9cc7a118c326d7520bb80a352f93e1a82"} Feb 27 10:37:44 crc kubenswrapper[4728]: I0227 10:37:44.724813 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" Feb 27 10:37:44 crc kubenswrapper[4728]: I0227 10:37:44.725073 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" Feb 27 10:37:45 crc kubenswrapper[4728]: I0227 10:37:45.038187 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt"] Feb 27 10:37:45 crc kubenswrapper[4728]: I0227 10:37:45.723950 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-n82qn" Feb 27 10:37:45 crc kubenswrapper[4728]: I0227 10:37:45.723940 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2bd92" Feb 27 10:37:45 crc kubenswrapper[4728]: I0227 10:37:45.724932 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-n82qn" Feb 27 10:37:45 crc kubenswrapper[4728]: I0227 10:37:45.724986 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2bd92" Feb 27 10:37:47 crc kubenswrapper[4728]: I0227 10:37:47.021169 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-2bd92"] Feb 27 10:37:47 crc kubenswrapper[4728]: W0227 10:37:47.092321 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52f3fd68_b1f1_4b15_b15c_5356d08aeedd.slice/crio-679582d1f33a13c254996d43edeab410f7119a9e9cbdea5f70c566c9940b8706 WatchSource:0}: Error finding container 679582d1f33a13c254996d43edeab410f7119a9e9cbdea5f70c566c9940b8706: Status 404 returned error can't find the container with id 679582d1f33a13c254996d43edeab410f7119a9e9cbdea5f70c566c9940b8706 Feb 27 10:37:47 crc kubenswrapper[4728]: I0227 10:37:47.278463 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-n82qn"] Feb 27 10:37:47 crc kubenswrapper[4728]: W0227 10:37:47.286812 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod685b65a4_9d96_4018_b8df_a45eccc1e923.slice/crio-c1ed5132ccd3b57e8e7bf77ea68236446230dd68ea6433bc7f0f9cbfe37c41d9 WatchSource:0}: Error finding container c1ed5132ccd3b57e8e7bf77ea68236446230dd68ea6433bc7f0f9cbfe37c41d9: Status 404 returned error can't find the container with id c1ed5132ccd3b57e8e7bf77ea68236446230dd68ea6433bc7f0f9cbfe37c41d9 Feb 27 10:37:47 crc kubenswrapper[4728]: I0227 10:37:47.487745 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-n82qn" event={"ID":"685b65a4-9d96-4018-b8df-a45eccc1e923","Type":"ContainerStarted","Data":"c1ed5132ccd3b57e8e7bf77ea68236446230dd68ea6433bc7f0f9cbfe37c41d9"} Feb 27 10:37:47 crc kubenswrapper[4728]: I0227 10:37:47.488923 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/perses-operator-5bf474d74f-2bd92" event={"ID":"52f3fd68-b1f1-4b15-b15c-5356d08aeedd","Type":"ContainerStarted","Data":"679582d1f33a13c254996d43edeab410f7119a9e9cbdea5f70c566c9940b8706"} Feb 27 10:37:47 crc kubenswrapper[4728]: I0227 10:37:47.490327 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" event={"ID":"18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82","Type":"ContainerStarted","Data":"aef49ff4d84c960f50ea6be7db092843df4aa4d48c0f879631178dfaa03678cb"} Feb 27 10:37:47 crc kubenswrapper[4728]: I0227 10:37:47.492075 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" event={"ID":"c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb","Type":"ContainerStarted","Data":"0ceb472df21a55923f170e82a1bad4e1cb250afd83f674bee30bc33eee0f0c98"} Feb 27 10:37:47 crc kubenswrapper[4728]: I0227 10:37:47.520819 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg" podStartSLOduration=14.447114505 podStartE2EDuration="18.52080258s" podCreationTimestamp="2026-02-27 10:37:29 +0000 UTC" firstStartedPulling="2026-02-27 10:37:43.050997367 +0000 UTC m=+683.013363463" lastFinishedPulling="2026-02-27 10:37:47.124685422 +0000 UTC m=+687.087051538" observedRunningTime="2026-02-27 10:37:47.517619664 +0000 UTC m=+687.479985780" watchObservedRunningTime="2026-02-27 10:37:47.52080258 +0000 UTC m=+687.483168676" Feb 27 10:37:48 crc kubenswrapper[4728]: I0227 10:37:48.502915 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" event={"ID":"18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82","Type":"ContainerStarted","Data":"0346fb233a49d3cf43ec6355428e748c8bcf9d8252acda2cd967614fa8264fc6"} Feb 27 10:37:48 crc kubenswrapper[4728]: I0227 
10:37:48.531459 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt" podStartSLOduration=18.629309162 podStartE2EDuration="19.531444627s" podCreationTimestamp="2026-02-27 10:37:29 +0000 UTC" firstStartedPulling="2026-02-27 10:37:46.799983975 +0000 UTC m=+686.762350081" lastFinishedPulling="2026-02-27 10:37:47.70211944 +0000 UTC m=+687.664485546" observedRunningTime="2026-02-27 10:37:48.527865031 +0000 UTC m=+688.490231157" watchObservedRunningTime="2026-02-27 10:37:48.531444627 +0000 UTC m=+688.493810733" Feb 27 10:37:49 crc kubenswrapper[4728]: I0227 10:37:49.515199 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp" event={"ID":"936f8a2d-37fa-4d39-9de8-a07aa8efaf6a","Type":"ContainerStarted","Data":"1c2535dd3fe086e2a1977773292e7ece56dbba06392f029a23fe0771c395aa3d"} Feb 27 10:37:49 crc kubenswrapper[4728]: I0227 10:37:49.538116 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zzdsp" podStartSLOduration=15.491962427 podStartE2EDuration="20.538096878s" podCreationTimestamp="2026-02-27 10:37:29 +0000 UTC" firstStartedPulling="2026-02-27 10:37:44.008686258 +0000 UTC m=+683.971052364" lastFinishedPulling="2026-02-27 10:37:49.054820719 +0000 UTC m=+689.017186815" observedRunningTime="2026-02-27 10:37:49.536897966 +0000 UTC m=+689.499264082" watchObservedRunningTime="2026-02-27 10:37:49.538096878 +0000 UTC m=+689.500462994" Feb 27 10:37:54 crc kubenswrapper[4728]: I0227 10:37:54.429896 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pmcdw" Feb 27 10:37:54 crc kubenswrapper[4728]: I0227 10:37:54.556199 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-2bd92" 
event={"ID":"52f3fd68-b1f1-4b15-b15c-5356d08aeedd","Type":"ContainerStarted","Data":"cc7b39fe0d1c46885f2441ba12361745e6b2329b993f4ceb08e6bce465b16dbf"} Feb 27 10:37:54 crc kubenswrapper[4728]: I0227 10:37:54.556480 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-2bd92" Feb 27 10:37:54 crc kubenswrapper[4728]: I0227 10:37:54.558340 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-n82qn" event={"ID":"685b65a4-9d96-4018-b8df-a45eccc1e923","Type":"ContainerStarted","Data":"8dc748e062f74cfc35c4b71c186575f571089a5789ef2757671e58208c23101f"} Feb 27 10:37:54 crc kubenswrapper[4728]: I0227 10:37:54.558557 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-n82qn" Feb 27 10:37:54 crc kubenswrapper[4728]: I0227 10:37:54.576456 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-2bd92" podStartSLOduration=18.653018057 podStartE2EDuration="25.576432121s" podCreationTimestamp="2026-02-27 10:37:29 +0000 UTC" firstStartedPulling="2026-02-27 10:37:47.094680149 +0000 UTC m=+687.057046255" lastFinishedPulling="2026-02-27 10:37:54.018094213 +0000 UTC m=+693.980460319" observedRunningTime="2026-02-27 10:37:54.571350794 +0000 UTC m=+694.533716920" watchObservedRunningTime="2026-02-27 10:37:54.576432121 +0000 UTC m=+694.538798237" Feb 27 10:37:54 crc kubenswrapper[4728]: I0227 10:37:54.587500 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-n82qn" Feb 27 10:37:54 crc kubenswrapper[4728]: I0227 10:37:54.594073 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-n82qn" podStartSLOduration=18.821895413 podStartE2EDuration="25.594050821s" 
podCreationTimestamp="2026-02-27 10:37:29 +0000 UTC" firstStartedPulling="2026-02-27 10:37:47.289214333 +0000 UTC m=+687.251580439" lastFinishedPulling="2026-02-27 10:37:54.061369731 +0000 UTC m=+694.023735847" observedRunningTime="2026-02-27 10:37:54.59137227 +0000 UTC m=+694.553738376" watchObservedRunningTime="2026-02-27 10:37:54.594050821 +0000 UTC m=+694.556416937" Feb 27 10:37:59 crc kubenswrapper[4728]: I0227 10:37:59.818730 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-2bd92" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.133751 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536478-gfnt7"] Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.134453 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536478-gfnt7" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.141985 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.142210 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.142470 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.150694 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536478-gfnt7"] Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.243749 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krpq9\" (UniqueName: \"kubernetes.io/projected/cc56479a-8090-419f-bcb1-5557bdae6677-kube-api-access-krpq9\") pod \"auto-csr-approver-29536478-gfnt7\" (UID: 
\"cc56479a-8090-419f-bcb1-5557bdae6677\") " pod="openshift-infra/auto-csr-approver-29536478-gfnt7" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.345783 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krpq9\" (UniqueName: \"kubernetes.io/projected/cc56479a-8090-419f-bcb1-5557bdae6677-kube-api-access-krpq9\") pod \"auto-csr-approver-29536478-gfnt7\" (UID: \"cc56479a-8090-419f-bcb1-5557bdae6677\") " pod="openshift-infra/auto-csr-approver-29536478-gfnt7" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.368573 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krpq9\" (UniqueName: \"kubernetes.io/projected/cc56479a-8090-419f-bcb1-5557bdae6677-kube-api-access-krpq9\") pod \"auto-csr-approver-29536478-gfnt7\" (UID: \"cc56479a-8090-419f-bcb1-5557bdae6677\") " pod="openshift-infra/auto-csr-approver-29536478-gfnt7" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.449634 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536478-gfnt7" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.800378 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cf62w"] Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.801794 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cf62w" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.807624 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-m79gz"] Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.808414 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-m79gz" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.808965 4728 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hwjpb" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.809052 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.809646 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.814065 4728 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-k9zxs" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.820623 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-m79gz"] Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.826594 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cf62w"] Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.847563 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lgkck"] Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.848473 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-lgkck" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.852003 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5zm6\" (UniqueName: \"kubernetes.io/projected/c8862af5-495d-455a-9b63-aeb694a4f768-kube-api-access-f5zm6\") pod \"cert-manager-858654f9db-m79gz\" (UID: \"c8862af5-495d-455a-9b63-aeb694a4f768\") " pod="cert-manager/cert-manager-858654f9db-m79gz" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.852067 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg5ht\" (UniqueName: \"kubernetes.io/projected/f79a2ce3-db43-4848-8b1a-4ebee40a850b-kube-api-access-fg5ht\") pod \"cert-manager-cainjector-cf98fcc89-cf62w\" (UID: \"f79a2ce3-db43-4848-8b1a-4ebee40a850b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cf62w" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.853214 4728 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-vlzgf" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.863489 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lgkck"] Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.903967 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536478-gfnt7"] Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.953625 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg5ht\" (UniqueName: \"kubernetes.io/projected/f79a2ce3-db43-4848-8b1a-4ebee40a850b-kube-api-access-fg5ht\") pod \"cert-manager-cainjector-cf98fcc89-cf62w\" (UID: \"f79a2ce3-db43-4848-8b1a-4ebee40a850b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cf62w" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.953762 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5zm6\" (UniqueName: \"kubernetes.io/projected/c8862af5-495d-455a-9b63-aeb694a4f768-kube-api-access-f5zm6\") pod \"cert-manager-858654f9db-m79gz\" (UID: \"c8862af5-495d-455a-9b63-aeb694a4f768\") " pod="cert-manager/cert-manager-858654f9db-m79gz" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.953790 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxcz8\" (UniqueName: \"kubernetes.io/projected/8bbbf6f7-c5e4-4942-8b37-cd24c2e729d9-kube-api-access-nxcz8\") pod \"cert-manager-webhook-687f57d79b-lgkck\" (UID: \"8bbbf6f7-c5e4-4942-8b37-cd24c2e729d9\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lgkck" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.971777 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg5ht\" (UniqueName: \"kubernetes.io/projected/f79a2ce3-db43-4848-8b1a-4ebee40a850b-kube-api-access-fg5ht\") pod \"cert-manager-cainjector-cf98fcc89-cf62w\" (UID: \"f79a2ce3-db43-4848-8b1a-4ebee40a850b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cf62w" Feb 27 10:38:00 crc kubenswrapper[4728]: I0227 10:38:00.971779 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5zm6\" (UniqueName: \"kubernetes.io/projected/c8862af5-495d-455a-9b63-aeb694a4f768-kube-api-access-f5zm6\") pod \"cert-manager-858654f9db-m79gz\" (UID: \"c8862af5-495d-455a-9b63-aeb694a4f768\") " pod="cert-manager/cert-manager-858654f9db-m79gz" Feb 27 10:38:01 crc kubenswrapper[4728]: I0227 10:38:01.055673 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxcz8\" (UniqueName: \"kubernetes.io/projected/8bbbf6f7-c5e4-4942-8b37-cd24c2e729d9-kube-api-access-nxcz8\") pod \"cert-manager-webhook-687f57d79b-lgkck\" (UID: \"8bbbf6f7-c5e4-4942-8b37-cd24c2e729d9\") " 
pod="cert-manager/cert-manager-webhook-687f57d79b-lgkck" Feb 27 10:38:01 crc kubenswrapper[4728]: I0227 10:38:01.075835 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxcz8\" (UniqueName: \"kubernetes.io/projected/8bbbf6f7-c5e4-4942-8b37-cd24c2e729d9-kube-api-access-nxcz8\") pod \"cert-manager-webhook-687f57d79b-lgkck\" (UID: \"8bbbf6f7-c5e4-4942-8b37-cd24c2e729d9\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lgkck" Feb 27 10:38:01 crc kubenswrapper[4728]: I0227 10:38:01.124209 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cf62w" Feb 27 10:38:01 crc kubenswrapper[4728]: I0227 10:38:01.134901 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-m79gz" Feb 27 10:38:01 crc kubenswrapper[4728]: I0227 10:38:01.171758 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-lgkck" Feb 27 10:38:01 crc kubenswrapper[4728]: I0227 10:38:01.444033 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lgkck"] Feb 27 10:38:01 crc kubenswrapper[4728]: I0227 10:38:01.565900 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cf62w"] Feb 27 10:38:01 crc kubenswrapper[4728]: I0227 10:38:01.609466 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536478-gfnt7" event={"ID":"cc56479a-8090-419f-bcb1-5557bdae6677","Type":"ContainerStarted","Data":"9dfa342d540d500f5f9f79d15108e1158474d85aaf7f76fed013f6044e7f1c0d"} Feb 27 10:38:01 crc kubenswrapper[4728]: I0227 10:38:01.610420 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-lgkck" 
event={"ID":"8bbbf6f7-c5e4-4942-8b37-cd24c2e729d9","Type":"ContainerStarted","Data":"62f504d1a2a42e8ee301fff0df883e8a87d87519273b5b921681eb9b7de252a6"} Feb 27 10:38:01 crc kubenswrapper[4728]: I0227 10:38:01.611893 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cf62w" event={"ID":"f79a2ce3-db43-4848-8b1a-4ebee40a850b","Type":"ContainerStarted","Data":"0f882667336d3bc0aec07b7c220a443d921d22d34861c611da0e6bdc19f9ada5"} Feb 27 10:38:01 crc kubenswrapper[4728]: I0227 10:38:01.731215 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-m79gz"] Feb 27 10:38:01 crc kubenswrapper[4728]: W0227 10:38:01.738459 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8862af5_495d_455a_9b63_aeb694a4f768.slice/crio-58824c33217898e0eb87c0584a5804a5966d3d10910f94bfbf8656882aa60537 WatchSource:0}: Error finding container 58824c33217898e0eb87c0584a5804a5966d3d10910f94bfbf8656882aa60537: Status 404 returned error can't find the container with id 58824c33217898e0eb87c0584a5804a5966d3d10910f94bfbf8656882aa60537 Feb 27 10:38:02 crc kubenswrapper[4728]: I0227 10:38:02.620684 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-m79gz" event={"ID":"c8862af5-495d-455a-9b63-aeb694a4f768","Type":"ContainerStarted","Data":"58824c33217898e0eb87c0584a5804a5966d3d10910f94bfbf8656882aa60537"} Feb 27 10:38:02 crc kubenswrapper[4728]: I0227 10:38:02.624428 4728 generic.go:334] "Generic (PLEG): container finished" podID="cc56479a-8090-419f-bcb1-5557bdae6677" containerID="47cf2e934432f313b3d956021e7985061ccf5e96e4cad0a9fc224fd456cb56e0" exitCode=0 Feb 27 10:38:02 crc kubenswrapper[4728]: I0227 10:38:02.624475 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536478-gfnt7" 
event={"ID":"cc56479a-8090-419f-bcb1-5557bdae6677","Type":"ContainerDied","Data":"47cf2e934432f313b3d956021e7985061ccf5e96e4cad0a9fc224fd456cb56e0"} Feb 27 10:38:05 crc kubenswrapper[4728]: I0227 10:38:05.922136 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:38:05 crc kubenswrapper[4728]: I0227 10:38:05.922976 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:38:06 crc kubenswrapper[4728]: I0227 10:38:06.708785 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536478-gfnt7" Feb 27 10:38:06 crc kubenswrapper[4728]: I0227 10:38:06.744412 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krpq9\" (UniqueName: \"kubernetes.io/projected/cc56479a-8090-419f-bcb1-5557bdae6677-kube-api-access-krpq9\") pod \"cc56479a-8090-419f-bcb1-5557bdae6677\" (UID: \"cc56479a-8090-419f-bcb1-5557bdae6677\") " Feb 27 10:38:06 crc kubenswrapper[4728]: I0227 10:38:06.752779 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc56479a-8090-419f-bcb1-5557bdae6677-kube-api-access-krpq9" (OuterVolumeSpecName: "kube-api-access-krpq9") pod "cc56479a-8090-419f-bcb1-5557bdae6677" (UID: "cc56479a-8090-419f-bcb1-5557bdae6677"). InnerVolumeSpecName "kube-api-access-krpq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:38:06 crc kubenswrapper[4728]: I0227 10:38:06.846792 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krpq9\" (UniqueName: \"kubernetes.io/projected/cc56479a-8090-419f-bcb1-5557bdae6677-kube-api-access-krpq9\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:07 crc kubenswrapper[4728]: I0227 10:38:07.661581 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536478-gfnt7" event={"ID":"cc56479a-8090-419f-bcb1-5557bdae6677","Type":"ContainerDied","Data":"9dfa342d540d500f5f9f79d15108e1158474d85aaf7f76fed013f6044e7f1c0d"} Feb 27 10:38:07 crc kubenswrapper[4728]: I0227 10:38:07.661892 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dfa342d540d500f5f9f79d15108e1158474d85aaf7f76fed013f6044e7f1c0d" Feb 27 10:38:07 crc kubenswrapper[4728]: I0227 10:38:07.661621 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536478-gfnt7" Feb 27 10:38:07 crc kubenswrapper[4728]: I0227 10:38:07.769279 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536472-w6zws"] Feb 27 10:38:07 crc kubenswrapper[4728]: I0227 10:38:07.772692 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536472-w6zws"] Feb 27 10:38:08 crc kubenswrapper[4728]: I0227 10:38:08.733819 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a679f75d-9b65-494b-8520-5f79c1ce159f" path="/var/lib/kubelet/pods/a679f75d-9b65-494b-8520-5f79c1ce159f/volumes" Feb 27 10:38:09 crc kubenswrapper[4728]: I0227 10:38:09.677528 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cf62w" 
event={"ID":"f79a2ce3-db43-4848-8b1a-4ebee40a850b","Type":"ContainerStarted","Data":"02937433d1bef2e13171a0489e8e143d808c2f9d6032d81af45099486356b8b8"} Feb 27 10:38:09 crc kubenswrapper[4728]: I0227 10:38:09.680017 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-m79gz" event={"ID":"c8862af5-495d-455a-9b63-aeb694a4f768","Type":"ContainerStarted","Data":"f84acf7b700dcbca794fae442f348563a3bda0c98c7c01263a7282e1f7f52edb"} Feb 27 10:38:09 crc kubenswrapper[4728]: I0227 10:38:09.682365 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-lgkck" event={"ID":"8bbbf6f7-c5e4-4942-8b37-cd24c2e729d9","Type":"ContainerStarted","Data":"d85089c91e2a12321e16598794315d45191cf077c698091d38b21ff2de2fe799"} Feb 27 10:38:09 crc kubenswrapper[4728]: I0227 10:38:09.682585 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-lgkck" Feb 27 10:38:09 crc kubenswrapper[4728]: I0227 10:38:09.705420 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cf62w" podStartSLOduration=2.486458259 podStartE2EDuration="9.70539233s" podCreationTimestamp="2026-02-27 10:38:00 +0000 UTC" firstStartedPulling="2026-02-27 10:38:01.57779948 +0000 UTC m=+701.540165586" lastFinishedPulling="2026-02-27 10:38:08.796733551 +0000 UTC m=+708.759099657" observedRunningTime="2026-02-27 10:38:09.698134196 +0000 UTC m=+709.660500312" watchObservedRunningTime="2026-02-27 10:38:09.70539233 +0000 UTC m=+709.667758476" Feb 27 10:38:09 crc kubenswrapper[4728]: I0227 10:38:09.728363 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-m79gz" podStartSLOduration=2.619182531 podStartE2EDuration="9.728332984s" podCreationTimestamp="2026-02-27 10:38:00 +0000 UTC" firstStartedPulling="2026-02-27 10:38:01.740448692 +0000 UTC 
m=+701.702814798" lastFinishedPulling="2026-02-27 10:38:08.849599135 +0000 UTC m=+708.811965251" observedRunningTime="2026-02-27 10:38:09.717108244 +0000 UTC m=+709.679474360" watchObservedRunningTime="2026-02-27 10:38:09.728332984 +0000 UTC m=+709.690699140" Feb 27 10:38:09 crc kubenswrapper[4728]: I0227 10:38:09.748883 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-lgkck" podStartSLOduration=2.411244028 podStartE2EDuration="9.748864433s" podCreationTimestamp="2026-02-27 10:38:00 +0000 UTC" firstStartedPulling="2026-02-27 10:38:01.458116969 +0000 UTC m=+701.420483075" lastFinishedPulling="2026-02-27 10:38:08.795737374 +0000 UTC m=+708.758103480" observedRunningTime="2026-02-27 10:38:09.744581889 +0000 UTC m=+709.706948005" watchObservedRunningTime="2026-02-27 10:38:09.748864433 +0000 UTC m=+709.711230539" Feb 27 10:38:16 crc kubenswrapper[4728]: I0227 10:38:16.176431 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-lgkck" Feb 27 10:38:35 crc kubenswrapper[4728]: I0227 10:38:35.921882 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:38:35 crc kubenswrapper[4728]: I0227 10:38:35.922602 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:38:35 crc kubenswrapper[4728]: I0227 10:38:35.923214 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:38:35 crc kubenswrapper[4728]: I0227 10:38:35.924083 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7142bbcd5732490b77191220972aa455a45bbcb3be86cc2f77bc37171cdfdc5d"} pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 10:38:35 crc kubenswrapper[4728]: I0227 10:38:35.924179 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" containerID="cri-o://7142bbcd5732490b77191220972aa455a45bbcb3be86cc2f77bc37171cdfdc5d" gracePeriod=600 Feb 27 10:38:36 crc kubenswrapper[4728]: I0227 10:38:36.895323 4728 generic.go:334] "Generic (PLEG): container finished" podID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerID="7142bbcd5732490b77191220972aa455a45bbcb3be86cc2f77bc37171cdfdc5d" exitCode=0 Feb 27 10:38:36 crc kubenswrapper[4728]: I0227 10:38:36.895424 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerDied","Data":"7142bbcd5732490b77191220972aa455a45bbcb3be86cc2f77bc37171cdfdc5d"} Feb 27 10:38:36 crc kubenswrapper[4728]: I0227 10:38:36.895661 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"0c1db5be2b8f7ae48c2eb85c7a1f9d89d594ab5c8b362069a65d852dc6140374"} Feb 27 10:38:36 crc kubenswrapper[4728]: I0227 10:38:36.895681 4728 scope.go:117] "RemoveContainer" 
containerID="2416fbc83dda100006dd5fec140cd5b4cb87d01da9d620e87c0949af705e048d" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.016931 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292"] Feb 27 10:38:43 crc kubenswrapper[4728]: E0227 10:38:43.017729 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc56479a-8090-419f-bcb1-5557bdae6677" containerName="oc" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.017744 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc56479a-8090-419f-bcb1-5557bdae6677" containerName="oc" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.017871 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc56479a-8090-419f-bcb1-5557bdae6677" containerName="oc" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.018909 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.021049 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.036212 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292"] Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.090633 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53d1767e-571f-4e76-8eba-d92dd82716ce-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292\" (UID: \"53d1767e-571f-4e76-8eba-d92dd82716ce\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 
10:38:43.090699 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53d1767e-571f-4e76-8eba-d92dd82716ce-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292\" (UID: \"53d1767e-571f-4e76-8eba-d92dd82716ce\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.090772 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4bmq\" (UniqueName: \"kubernetes.io/projected/53d1767e-571f-4e76-8eba-d92dd82716ce-kube-api-access-h4bmq\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292\" (UID: \"53d1767e-571f-4e76-8eba-d92dd82716ce\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.192255 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4bmq\" (UniqueName: \"kubernetes.io/projected/53d1767e-571f-4e76-8eba-d92dd82716ce-kube-api-access-h4bmq\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292\" (UID: \"53d1767e-571f-4e76-8eba-d92dd82716ce\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.192331 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53d1767e-571f-4e76-8eba-d92dd82716ce-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292\" (UID: \"53d1767e-571f-4e76-8eba-d92dd82716ce\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.192389 4728 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53d1767e-571f-4e76-8eba-d92dd82716ce-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292\" (UID: \"53d1767e-571f-4e76-8eba-d92dd82716ce\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.192970 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53d1767e-571f-4e76-8eba-d92dd82716ce-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292\" (UID: \"53d1767e-571f-4e76-8eba-d92dd82716ce\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.193570 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53d1767e-571f-4e76-8eba-d92dd82716ce-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292\" (UID: \"53d1767e-571f-4e76-8eba-d92dd82716ce\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.235595 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4bmq\" (UniqueName: \"kubernetes.io/projected/53d1767e-571f-4e76-8eba-d92dd82716ce-kube-api-access-h4bmq\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292\" (UID: \"53d1767e-571f-4e76-8eba-d92dd82716ce\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.241088 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv"] Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.254148 4728 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.257025 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv"] Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.293797 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c824e4a-0c58-41aa-8bee-1bd53bb4967d-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv\" (UID: \"9c824e4a-0c58-41aa-8bee-1bd53bb4967d\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.293896 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkvk8\" (UniqueName: \"kubernetes.io/projected/9c824e4a-0c58-41aa-8bee-1bd53bb4967d-kube-api-access-mkvk8\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv\" (UID: \"9c824e4a-0c58-41aa-8bee-1bd53bb4967d\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.293937 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c824e4a-0c58-41aa-8bee-1bd53bb4967d-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv\" (UID: \"9c824e4a-0c58-41aa-8bee-1bd53bb4967d\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.340211 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.395609 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkvk8\" (UniqueName: \"kubernetes.io/projected/9c824e4a-0c58-41aa-8bee-1bd53bb4967d-kube-api-access-mkvk8\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv\" (UID: \"9c824e4a-0c58-41aa-8bee-1bd53bb4967d\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.395952 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c824e4a-0c58-41aa-8bee-1bd53bb4967d-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv\" (UID: \"9c824e4a-0c58-41aa-8bee-1bd53bb4967d\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.395999 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c824e4a-0c58-41aa-8bee-1bd53bb4967d-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv\" (UID: \"9c824e4a-0c58-41aa-8bee-1bd53bb4967d\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.396704 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c824e4a-0c58-41aa-8bee-1bd53bb4967d-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv\" (UID: \"9c824e4a-0c58-41aa-8bee-1bd53bb4967d\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 
10:38:43.396755 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c824e4a-0c58-41aa-8bee-1bd53bb4967d-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv\" (UID: \"9c824e4a-0c58-41aa-8bee-1bd53bb4967d\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.414379 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkvk8\" (UniqueName: \"kubernetes.io/projected/9c824e4a-0c58-41aa-8bee-1bd53bb4967d-kube-api-access-mkvk8\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv\" (UID: \"9c824e4a-0c58-41aa-8bee-1bd53bb4967d\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.588932 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.796163 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv"] Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.804152 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292"] Feb 27 10:38:43 crc kubenswrapper[4728]: W0227 10:38:43.806827 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53d1767e_571f_4e76_8eba_d92dd82716ce.slice/crio-bf8ccee5912d1b6193a6577ad420718c76adfc074671c664278c62ceac8de935 WatchSource:0}: Error finding container bf8ccee5912d1b6193a6577ad420718c76adfc074671c664278c62ceac8de935: Status 404 returned error can't find the container 
with id bf8ccee5912d1b6193a6577ad420718c76adfc074671c664278c62ceac8de935 Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.955680 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" event={"ID":"53d1767e-571f-4e76-8eba-d92dd82716ce","Type":"ContainerStarted","Data":"17811f6f96253a31da6af79d971bea0635082ef1a47a23e89bc0eb70e00a1a3e"} Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.956199 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" event={"ID":"53d1767e-571f-4e76-8eba-d92dd82716ce","Type":"ContainerStarted","Data":"bf8ccee5912d1b6193a6577ad420718c76adfc074671c664278c62ceac8de935"} Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.957334 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" event={"ID":"9c824e4a-0c58-41aa-8bee-1bd53bb4967d","Type":"ContainerStarted","Data":"ca08930720049aa501c4d8c2de554ed4875598507b6b8ea020822c7a651c76d5"} Feb 27 10:38:43 crc kubenswrapper[4728]: I0227 10:38:43.957362 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" event={"ID":"9c824e4a-0c58-41aa-8bee-1bd53bb4967d","Type":"ContainerStarted","Data":"3b24d31cf884270908273101adbdd71e7bd77657192f1aa5aa675c961053a211"} Feb 27 10:38:44 crc kubenswrapper[4728]: I0227 10:38:44.965142 4728 generic.go:334] "Generic (PLEG): container finished" podID="9c824e4a-0c58-41aa-8bee-1bd53bb4967d" containerID="ca08930720049aa501c4d8c2de554ed4875598507b6b8ea020822c7a651c76d5" exitCode=0 Feb 27 10:38:44 crc kubenswrapper[4728]: I0227 10:38:44.965245 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" 
event={"ID":"9c824e4a-0c58-41aa-8bee-1bd53bb4967d","Type":"ContainerDied","Data":"ca08930720049aa501c4d8c2de554ed4875598507b6b8ea020822c7a651c76d5"} Feb 27 10:38:44 crc kubenswrapper[4728]: I0227 10:38:44.966780 4728 generic.go:334] "Generic (PLEG): container finished" podID="53d1767e-571f-4e76-8eba-d92dd82716ce" containerID="17811f6f96253a31da6af79d971bea0635082ef1a47a23e89bc0eb70e00a1a3e" exitCode=0 Feb 27 10:38:44 crc kubenswrapper[4728]: I0227 10:38:44.966817 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" event={"ID":"53d1767e-571f-4e76-8eba-d92dd82716ce","Type":"ContainerDied","Data":"17811f6f96253a31da6af79d971bea0635082ef1a47a23e89bc0eb70e00a1a3e"} Feb 27 10:38:46 crc kubenswrapper[4728]: I0227 10:38:46.987696 4728 generic.go:334] "Generic (PLEG): container finished" podID="53d1767e-571f-4e76-8eba-d92dd82716ce" containerID="0ed38323e764a7990497230c08d96c493932d9904d403924f7078fb089ea48ff" exitCode=0 Feb 27 10:38:46 crc kubenswrapper[4728]: I0227 10:38:46.987789 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" event={"ID":"53d1767e-571f-4e76-8eba-d92dd82716ce","Type":"ContainerDied","Data":"0ed38323e764a7990497230c08d96c493932d9904d403924f7078fb089ea48ff"} Feb 27 10:38:47 crc kubenswrapper[4728]: I0227 10:38:47.775828 4728 scope.go:117] "RemoveContainer" containerID="3102fb675e817da84f9726e038944f1e0ccf862fa0ca0f3822d16b585b47c058" Feb 27 10:38:48 crc kubenswrapper[4728]: I0227 10:38:48.003275 4728 generic.go:334] "Generic (PLEG): container finished" podID="53d1767e-571f-4e76-8eba-d92dd82716ce" containerID="fe7e1801af3e995e3ef96594168240b3bd5d978c992e999e425be90e85e911f3" exitCode=0 Feb 27 10:38:48 crc kubenswrapper[4728]: I0227 10:38:48.003318 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" event={"ID":"53d1767e-571f-4e76-8eba-d92dd82716ce","Type":"ContainerDied","Data":"fe7e1801af3e995e3ef96594168240b3bd5d978c992e999e425be90e85e911f3"} Feb 27 10:38:49 crc kubenswrapper[4728]: I0227 10:38:49.358325 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" Feb 27 10:38:49 crc kubenswrapper[4728]: I0227 10:38:49.381631 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4bmq\" (UniqueName: \"kubernetes.io/projected/53d1767e-571f-4e76-8eba-d92dd82716ce-kube-api-access-h4bmq\") pod \"53d1767e-571f-4e76-8eba-d92dd82716ce\" (UID: \"53d1767e-571f-4e76-8eba-d92dd82716ce\") " Feb 27 10:38:49 crc kubenswrapper[4728]: I0227 10:38:49.381944 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53d1767e-571f-4e76-8eba-d92dd82716ce-util\") pod \"53d1767e-571f-4e76-8eba-d92dd82716ce\" (UID: \"53d1767e-571f-4e76-8eba-d92dd82716ce\") " Feb 27 10:38:49 crc kubenswrapper[4728]: I0227 10:38:49.382051 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53d1767e-571f-4e76-8eba-d92dd82716ce-bundle\") pod \"53d1767e-571f-4e76-8eba-d92dd82716ce\" (UID: \"53d1767e-571f-4e76-8eba-d92dd82716ce\") " Feb 27 10:38:49 crc kubenswrapper[4728]: I0227 10:38:49.383673 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53d1767e-571f-4e76-8eba-d92dd82716ce-bundle" (OuterVolumeSpecName: "bundle") pod "53d1767e-571f-4e76-8eba-d92dd82716ce" (UID: "53d1767e-571f-4e76-8eba-d92dd82716ce"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:38:49 crc kubenswrapper[4728]: I0227 10:38:49.398728 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53d1767e-571f-4e76-8eba-d92dd82716ce-kube-api-access-h4bmq" (OuterVolumeSpecName: "kube-api-access-h4bmq") pod "53d1767e-571f-4e76-8eba-d92dd82716ce" (UID: "53d1767e-571f-4e76-8eba-d92dd82716ce"). InnerVolumeSpecName "kube-api-access-h4bmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:38:49 crc kubenswrapper[4728]: I0227 10:38:49.483175 4728 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53d1767e-571f-4e76-8eba-d92dd82716ce-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:49 crc kubenswrapper[4728]: I0227 10:38:49.483212 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4bmq\" (UniqueName: \"kubernetes.io/projected/53d1767e-571f-4e76-8eba-d92dd82716ce-kube-api-access-h4bmq\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:49 crc kubenswrapper[4728]: I0227 10:38:49.974523 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53d1767e-571f-4e76-8eba-d92dd82716ce-util" (OuterVolumeSpecName: "util") pod "53d1767e-571f-4e76-8eba-d92dd82716ce" (UID: "53d1767e-571f-4e76-8eba-d92dd82716ce"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:38:49 crc kubenswrapper[4728]: I0227 10:38:49.989493 4728 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53d1767e-571f-4e76-8eba-d92dd82716ce-util\") on node \"crc\" DevicePath \"\"" Feb 27 10:38:50 crc kubenswrapper[4728]: I0227 10:38:50.021807 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" event={"ID":"53d1767e-571f-4e76-8eba-d92dd82716ce","Type":"ContainerDied","Data":"bf8ccee5912d1b6193a6577ad420718c76adfc074671c664278c62ceac8de935"} Feb 27 10:38:50 crc kubenswrapper[4728]: I0227 10:38:50.021855 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf8ccee5912d1b6193a6577ad420718c76adfc074671c664278c62ceac8de935" Feb 27 10:38:50 crc kubenswrapper[4728]: I0227 10:38:50.021941 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.141707 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k"] Feb 27 10:38:59 crc kubenswrapper[4728]: E0227 10:38:59.142575 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d1767e-571f-4e76-8eba-d92dd82716ce" containerName="extract" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.142592 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d1767e-571f-4e76-8eba-d92dd82716ce" containerName="extract" Feb 27 10:38:59 crc kubenswrapper[4728]: E0227 10:38:59.142611 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d1767e-571f-4e76-8eba-d92dd82716ce" containerName="util" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.142620 4728 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="53d1767e-571f-4e76-8eba-d92dd82716ce" containerName="util" Feb 27 10:38:59 crc kubenswrapper[4728]: E0227 10:38:59.142641 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d1767e-571f-4e76-8eba-d92dd82716ce" containerName="pull" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.142650 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d1767e-571f-4e76-8eba-d92dd82716ce" containerName="pull" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.142797 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="53d1767e-571f-4e76-8eba-d92dd82716ce" containerName="extract" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.143662 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.146925 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.146948 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.155664 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k"] Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.155802 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.156042 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-dwmmz" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.156169 4728 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.156267 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.186600 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnsh5\" (UniqueName: \"kubernetes.io/projected/392252ea-72ab-456c-9462-1c85678476cb-kube-api-access-lnsh5\") pod \"loki-operator-controller-manager-5bcdfff8f4-2p27k\" (UID: \"392252ea-72ab-456c-9462-1c85678476cb\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.186644 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/392252ea-72ab-456c-9462-1c85678476cb-manager-config\") pod \"loki-operator-controller-manager-5bcdfff8f4-2p27k\" (UID: \"392252ea-72ab-456c-9462-1c85678476cb\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.186684 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/392252ea-72ab-456c-9462-1c85678476cb-webhook-cert\") pod \"loki-operator-controller-manager-5bcdfff8f4-2p27k\" (UID: \"392252ea-72ab-456c-9462-1c85678476cb\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.186726 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/392252ea-72ab-456c-9462-1c85678476cb-loki-operator-metrics-cert\") pod 
\"loki-operator-controller-manager-5bcdfff8f4-2p27k\" (UID: \"392252ea-72ab-456c-9462-1c85678476cb\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.186749 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/392252ea-72ab-456c-9462-1c85678476cb-apiservice-cert\") pod \"loki-operator-controller-manager-5bcdfff8f4-2p27k\" (UID: \"392252ea-72ab-456c-9462-1c85678476cb\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.288150 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/392252ea-72ab-456c-9462-1c85678476cb-apiservice-cert\") pod \"loki-operator-controller-manager-5bcdfff8f4-2p27k\" (UID: \"392252ea-72ab-456c-9462-1c85678476cb\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.288468 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnsh5\" (UniqueName: \"kubernetes.io/projected/392252ea-72ab-456c-9462-1c85678476cb-kube-api-access-lnsh5\") pod \"loki-operator-controller-manager-5bcdfff8f4-2p27k\" (UID: \"392252ea-72ab-456c-9462-1c85678476cb\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.288579 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/392252ea-72ab-456c-9462-1c85678476cb-manager-config\") pod \"loki-operator-controller-manager-5bcdfff8f4-2p27k\" (UID: \"392252ea-72ab-456c-9462-1c85678476cb\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.288678 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/392252ea-72ab-456c-9462-1c85678476cb-webhook-cert\") pod \"loki-operator-controller-manager-5bcdfff8f4-2p27k\" (UID: \"392252ea-72ab-456c-9462-1c85678476cb\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.288768 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/392252ea-72ab-456c-9462-1c85678476cb-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5bcdfff8f4-2p27k\" (UID: \"392252ea-72ab-456c-9462-1c85678476cb\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.289400 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/392252ea-72ab-456c-9462-1c85678476cb-manager-config\") pod \"loki-operator-controller-manager-5bcdfff8f4-2p27k\" (UID: \"392252ea-72ab-456c-9462-1c85678476cb\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.293988 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/392252ea-72ab-456c-9462-1c85678476cb-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5bcdfff8f4-2p27k\" (UID: \"392252ea-72ab-456c-9462-1c85678476cb\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.295069 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/392252ea-72ab-456c-9462-1c85678476cb-webhook-cert\") pod \"loki-operator-controller-manager-5bcdfff8f4-2p27k\" (UID: \"392252ea-72ab-456c-9462-1c85678476cb\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.301762 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/392252ea-72ab-456c-9462-1c85678476cb-apiservice-cert\") pod \"loki-operator-controller-manager-5bcdfff8f4-2p27k\" (UID: \"392252ea-72ab-456c-9462-1c85678476cb\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.311152 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnsh5\" (UniqueName: \"kubernetes.io/projected/392252ea-72ab-456c-9462-1c85678476cb-kube-api-access-lnsh5\") pod \"loki-operator-controller-manager-5bcdfff8f4-2p27k\" (UID: \"392252ea-72ab-456c-9462-1c85678476cb\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.478204 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:38:59 crc kubenswrapper[4728]: I0227 10:38:59.767335 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k"] Feb 27 10:38:59 crc kubenswrapper[4728]: W0227 10:38:59.774679 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod392252ea_72ab_456c_9462_1c85678476cb.slice/crio-03c443152e7cd7526b74cddec8fc328dbe1346eae68c21806a37e07c3c97eb11 WatchSource:0}: Error finding container 03c443152e7cd7526b74cddec8fc328dbe1346eae68c21806a37e07c3c97eb11: Status 404 returned error can't find the container with id 03c443152e7cd7526b74cddec8fc328dbe1346eae68c21806a37e07c3c97eb11 Feb 27 10:39:00 crc kubenswrapper[4728]: I0227 10:39:00.096529 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" event={"ID":"392252ea-72ab-456c-9462-1c85678476cb","Type":"ContainerStarted","Data":"03c443152e7cd7526b74cddec8fc328dbe1346eae68c21806a37e07c3c97eb11"} Feb 27 10:39:05 crc kubenswrapper[4728]: I0227 10:39:05.140062 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" event={"ID":"392252ea-72ab-456c-9462-1c85678476cb","Type":"ContainerStarted","Data":"3861a607686ade0b7c7a401f3bd98bb0ad674da7b543b641ca8254317cd5527a"} Feb 27 10:39:06 crc kubenswrapper[4728]: I0227 10:39:06.148953 4728 generic.go:334] "Generic (PLEG): container finished" podID="9c824e4a-0c58-41aa-8bee-1bd53bb4967d" containerID="fbe6a96191bda519d9ded861e4a5eeeb294cc6f0ca3bb17ef50f6c22dfb10fa8" exitCode=0 Feb 27 10:39:06 crc kubenswrapper[4728]: I0227 10:39:06.149009 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" event={"ID":"9c824e4a-0c58-41aa-8bee-1bd53bb4967d","Type":"ContainerDied","Data":"fbe6a96191bda519d9ded861e4a5eeeb294cc6f0ca3bb17ef50f6c22dfb10fa8"} Feb 27 10:39:07 crc kubenswrapper[4728]: I0227 10:39:07.163387 4728 generic.go:334] "Generic (PLEG): container finished" podID="9c824e4a-0c58-41aa-8bee-1bd53bb4967d" containerID="8d302c582b7969f3d87b0b06014ef2cc8ed7bb0e9a5bb6cda882f5f8195403c2" exitCode=0 Feb 27 10:39:07 crc kubenswrapper[4728]: I0227 10:39:07.163568 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" event={"ID":"9c824e4a-0c58-41aa-8bee-1bd53bb4967d","Type":"ContainerDied","Data":"8d302c582b7969f3d87b0b06014ef2cc8ed7bb0e9a5bb6cda882f5f8195403c2"} Feb 27 10:39:10 crc kubenswrapper[4728]: I0227 10:39:10.820753 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" Feb 27 10:39:10 crc kubenswrapper[4728]: I0227 10:39:10.912617 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c824e4a-0c58-41aa-8bee-1bd53bb4967d-bundle\") pod \"9c824e4a-0c58-41aa-8bee-1bd53bb4967d\" (UID: \"9c824e4a-0c58-41aa-8bee-1bd53bb4967d\") " Feb 27 10:39:10 crc kubenswrapper[4728]: I0227 10:39:10.912677 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c824e4a-0c58-41aa-8bee-1bd53bb4967d-util\") pod \"9c824e4a-0c58-41aa-8bee-1bd53bb4967d\" (UID: \"9c824e4a-0c58-41aa-8bee-1bd53bb4967d\") " Feb 27 10:39:10 crc kubenswrapper[4728]: I0227 10:39:10.912763 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkvk8\" (UniqueName: 
\"kubernetes.io/projected/9c824e4a-0c58-41aa-8bee-1bd53bb4967d-kube-api-access-mkvk8\") pod \"9c824e4a-0c58-41aa-8bee-1bd53bb4967d\" (UID: \"9c824e4a-0c58-41aa-8bee-1bd53bb4967d\") " Feb 27 10:39:10 crc kubenswrapper[4728]: I0227 10:39:10.914334 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c824e4a-0c58-41aa-8bee-1bd53bb4967d-bundle" (OuterVolumeSpecName: "bundle") pod "9c824e4a-0c58-41aa-8bee-1bd53bb4967d" (UID: "9c824e4a-0c58-41aa-8bee-1bd53bb4967d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:39:10 crc kubenswrapper[4728]: I0227 10:39:10.925741 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c824e4a-0c58-41aa-8bee-1bd53bb4967d-kube-api-access-mkvk8" (OuterVolumeSpecName: "kube-api-access-mkvk8") pod "9c824e4a-0c58-41aa-8bee-1bd53bb4967d" (UID: "9c824e4a-0c58-41aa-8bee-1bd53bb4967d"). InnerVolumeSpecName "kube-api-access-mkvk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:39:10 crc kubenswrapper[4728]: I0227 10:39:10.943824 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c824e4a-0c58-41aa-8bee-1bd53bb4967d-util" (OuterVolumeSpecName: "util") pod "9c824e4a-0c58-41aa-8bee-1bd53bb4967d" (UID: "9c824e4a-0c58-41aa-8bee-1bd53bb4967d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:39:11 crc kubenswrapper[4728]: I0227 10:39:11.014340 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkvk8\" (UniqueName: \"kubernetes.io/projected/9c824e4a-0c58-41aa-8bee-1bd53bb4967d-kube-api-access-mkvk8\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:11 crc kubenswrapper[4728]: I0227 10:39:11.014373 4728 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c824e4a-0c58-41aa-8bee-1bd53bb4967d-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:11 crc kubenswrapper[4728]: I0227 10:39:11.014382 4728 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c824e4a-0c58-41aa-8bee-1bd53bb4967d-util\") on node \"crc\" DevicePath \"\"" Feb 27 10:39:11 crc kubenswrapper[4728]: I0227 10:39:11.191972 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" event={"ID":"9c824e4a-0c58-41aa-8bee-1bd53bb4967d","Type":"ContainerDied","Data":"3b24d31cf884270908273101adbdd71e7bd77657192f1aa5aa675c961053a211"} Feb 27 10:39:11 crc kubenswrapper[4728]: I0227 10:39:11.192063 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b24d31cf884270908273101adbdd71e7bd77657192f1aa5aa675c961053a211" Feb 27 10:39:11 crc kubenswrapper[4728]: I0227 10:39:11.192206 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv" Feb 27 10:39:12 crc kubenswrapper[4728]: I0227 10:39:12.202610 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" event={"ID":"392252ea-72ab-456c-9462-1c85678476cb","Type":"ContainerStarted","Data":"6eae8f5fe65a58608997a1852cd3c145d388260f0cee645d7dfced0f78c06ca1"} Feb 27 10:39:12 crc kubenswrapper[4728]: I0227 10:39:12.202949 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:39:12 crc kubenswrapper[4728]: I0227 10:39:12.210289 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" Feb 27 10:39:12 crc kubenswrapper[4728]: I0227 10:39:12.241795 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-5bcdfff8f4-2p27k" podStartSLOduration=1.513013547 podStartE2EDuration="13.241682366s" podCreationTimestamp="2026-02-27 10:38:59 +0000 UTC" firstStartedPulling="2026-02-27 10:38:59.776460709 +0000 UTC m=+759.738826815" lastFinishedPulling="2026-02-27 10:39:11.505129528 +0000 UTC m=+771.467495634" observedRunningTime="2026-02-27 10:39:12.23513943 +0000 UTC m=+772.197505576" watchObservedRunningTime="2026-02-27 10:39:12.241682366 +0000 UTC m=+772.204048472" Feb 27 10:39:15 crc kubenswrapper[4728]: I0227 10:39:15.246222 4728 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 27 10:39:19 crc kubenswrapper[4728]: I0227 10:39:19.524484 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-c8l9h"] Feb 27 10:39:19 crc kubenswrapper[4728]: E0227 
10:39:19.525355 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c824e4a-0c58-41aa-8bee-1bd53bb4967d" containerName="extract" Feb 27 10:39:19 crc kubenswrapper[4728]: I0227 10:39:19.525374 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c824e4a-0c58-41aa-8bee-1bd53bb4967d" containerName="extract" Feb 27 10:39:19 crc kubenswrapper[4728]: E0227 10:39:19.525405 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c824e4a-0c58-41aa-8bee-1bd53bb4967d" containerName="pull" Feb 27 10:39:19 crc kubenswrapper[4728]: I0227 10:39:19.525413 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c824e4a-0c58-41aa-8bee-1bd53bb4967d" containerName="pull" Feb 27 10:39:19 crc kubenswrapper[4728]: E0227 10:39:19.525424 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c824e4a-0c58-41aa-8bee-1bd53bb4967d" containerName="util" Feb 27 10:39:19 crc kubenswrapper[4728]: I0227 10:39:19.525432 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c824e4a-0c58-41aa-8bee-1bd53bb4967d" containerName="util" Feb 27 10:39:19 crc kubenswrapper[4728]: I0227 10:39:19.525589 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c824e4a-0c58-41aa-8bee-1bd53bb4967d" containerName="extract" Feb 27 10:39:19 crc kubenswrapper[4728]: I0227 10:39:19.526083 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-c8l9h" Feb 27 10:39:19 crc kubenswrapper[4728]: I0227 10:39:19.528155 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 27 10:39:19 crc kubenswrapper[4728]: I0227 10:39:19.528234 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-5fbdd" Feb 27 10:39:19 crc kubenswrapper[4728]: I0227 10:39:19.528384 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 27 10:39:19 crc kubenswrapper[4728]: I0227 10:39:19.533355 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-c8l9h"] Feb 27 10:39:19 crc kubenswrapper[4728]: I0227 10:39:19.649611 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p62t\" (UniqueName: \"kubernetes.io/projected/6aa8d145-6b1c-4547-8f98-de72f2f0b5e2-kube-api-access-8p62t\") pod \"cluster-logging-operator-c769fd969-c8l9h\" (UID: \"6aa8d145-6b1c-4547-8f98-de72f2f0b5e2\") " pod="openshift-logging/cluster-logging-operator-c769fd969-c8l9h" Feb 27 10:39:19 crc kubenswrapper[4728]: I0227 10:39:19.750707 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p62t\" (UniqueName: \"kubernetes.io/projected/6aa8d145-6b1c-4547-8f98-de72f2f0b5e2-kube-api-access-8p62t\") pod \"cluster-logging-operator-c769fd969-c8l9h\" (UID: \"6aa8d145-6b1c-4547-8f98-de72f2f0b5e2\") " pod="openshift-logging/cluster-logging-operator-c769fd969-c8l9h" Feb 27 10:39:19 crc kubenswrapper[4728]: I0227 10:39:19.771777 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p62t\" (UniqueName: \"kubernetes.io/projected/6aa8d145-6b1c-4547-8f98-de72f2f0b5e2-kube-api-access-8p62t\") pod 
\"cluster-logging-operator-c769fd969-c8l9h\" (UID: \"6aa8d145-6b1c-4547-8f98-de72f2f0b5e2\") " pod="openshift-logging/cluster-logging-operator-c769fd969-c8l9h" Feb 27 10:39:19 crc kubenswrapper[4728]: I0227 10:39:19.843069 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-c8l9h" Feb 27 10:39:20 crc kubenswrapper[4728]: I0227 10:39:20.263712 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-c8l9h"] Feb 27 10:39:21 crc kubenswrapper[4728]: I0227 10:39:21.276469 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-c8l9h" event={"ID":"6aa8d145-6b1c-4547-8f98-de72f2f0b5e2","Type":"ContainerStarted","Data":"16918241dd304314e32567296027cee791af1f0175934097a4f0a47b8f8ab679"} Feb 27 10:39:28 crc kubenswrapper[4728]: I0227 10:39:28.322202 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-c8l9h" event={"ID":"6aa8d145-6b1c-4547-8f98-de72f2f0b5e2","Type":"ContainerStarted","Data":"ab78278899a6b19d0b08543c8d889d7a02c9dea1328c39f0bb9c580fb5818460"} Feb 27 10:39:28 crc kubenswrapper[4728]: I0227 10:39:28.346446 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-c8l9h" podStartSLOduration=2.511996254 podStartE2EDuration="9.346420459s" podCreationTimestamp="2026-02-27 10:39:19 +0000 UTC" firstStartedPulling="2026-02-27 10:39:20.279609232 +0000 UTC m=+780.241975338" lastFinishedPulling="2026-02-27 10:39:27.114033437 +0000 UTC m=+787.076399543" observedRunningTime="2026-02-27 10:39:28.345139605 +0000 UTC m=+788.307505721" watchObservedRunningTime="2026-02-27 10:39:28.346420459 +0000 UTC m=+788.308786575" Feb 27 10:39:33 crc kubenswrapper[4728]: I0227 10:39:33.048232 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] 
Feb 27 10:39:33 crc kubenswrapper[4728]: I0227 10:39:33.050680 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 27 10:39:33 crc kubenswrapper[4728]: I0227 10:39:33.055868 4728 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-6f6vq" Feb 27 10:39:33 crc kubenswrapper[4728]: I0227 10:39:33.056247 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 27 10:39:33 crc kubenswrapper[4728]: I0227 10:39:33.056556 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 27 10:39:33 crc kubenswrapper[4728]: I0227 10:39:33.057992 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 27 10:39:33 crc kubenswrapper[4728]: I0227 10:39:33.166710 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c76f82fb-76b9-4a2f-b8f1-596bb67e29bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c76f82fb-76b9-4a2f-b8f1-596bb67e29bd\") pod \"minio\" (UID: \"58aec174-a80e-44a4-962f-4ad316dc0488\") " pod="minio-dev/minio" Feb 27 10:39:33 crc kubenswrapper[4728]: I0227 10:39:33.166790 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6784\" (UniqueName: \"kubernetes.io/projected/58aec174-a80e-44a4-962f-4ad316dc0488-kube-api-access-k6784\") pod \"minio\" (UID: \"58aec174-a80e-44a4-962f-4ad316dc0488\") " pod="minio-dev/minio" Feb 27 10:39:33 crc kubenswrapper[4728]: I0227 10:39:33.267835 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6784\" (UniqueName: \"kubernetes.io/projected/58aec174-a80e-44a4-962f-4ad316dc0488-kube-api-access-k6784\") pod \"minio\" (UID: \"58aec174-a80e-44a4-962f-4ad316dc0488\") " pod="minio-dev/minio" Feb 27 10:39:33 crc kubenswrapper[4728]: I0227 
10:39:33.267975 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c76f82fb-76b9-4a2f-b8f1-596bb67e29bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c76f82fb-76b9-4a2f-b8f1-596bb67e29bd\") pod \"minio\" (UID: \"58aec174-a80e-44a4-962f-4ad316dc0488\") " pod="minio-dev/minio" Feb 27 10:39:33 crc kubenswrapper[4728]: I0227 10:39:33.271754 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 10:39:33 crc kubenswrapper[4728]: I0227 10:39:33.271803 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c76f82fb-76b9-4a2f-b8f1-596bb67e29bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c76f82fb-76b9-4a2f-b8f1-596bb67e29bd\") pod \"minio\" (UID: \"58aec174-a80e-44a4-962f-4ad316dc0488\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac7985a055f852144701f2ad8fc204d09b0e50460a8f2dc9335587994338aecf/globalmount\"" pod="minio-dev/minio" Feb 27 10:39:33 crc kubenswrapper[4728]: I0227 10:39:33.305198 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6784\" (UniqueName: \"kubernetes.io/projected/58aec174-a80e-44a4-962f-4ad316dc0488-kube-api-access-k6784\") pod \"minio\" (UID: \"58aec174-a80e-44a4-962f-4ad316dc0488\") " pod="minio-dev/minio" Feb 27 10:39:33 crc kubenswrapper[4728]: I0227 10:39:33.324624 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c76f82fb-76b9-4a2f-b8f1-596bb67e29bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c76f82fb-76b9-4a2f-b8f1-596bb67e29bd\") pod \"minio\" (UID: \"58aec174-a80e-44a4-962f-4ad316dc0488\") " pod="minio-dev/minio" Feb 27 10:39:33 crc kubenswrapper[4728]: I0227 10:39:33.392904 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 27 10:39:33 crc kubenswrapper[4728]: I0227 10:39:33.618367 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 27 10:39:33 crc kubenswrapper[4728]: W0227 10:39:33.622621 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58aec174_a80e_44a4_962f_4ad316dc0488.slice/crio-feae43ecc7400f2249adb7c1ac0701938a6e8a9b9a2d2da00e34295a21196220 WatchSource:0}: Error finding container feae43ecc7400f2249adb7c1ac0701938a6e8a9b9a2d2da00e34295a21196220: Status 404 returned error can't find the container with id feae43ecc7400f2249adb7c1ac0701938a6e8a9b9a2d2da00e34295a21196220 Feb 27 10:39:34 crc kubenswrapper[4728]: I0227 10:39:34.359376 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"58aec174-a80e-44a4-962f-4ad316dc0488","Type":"ContainerStarted","Data":"feae43ecc7400f2249adb7c1ac0701938a6e8a9b9a2d2da00e34295a21196220"} Feb 27 10:39:37 crc kubenswrapper[4728]: I0227 10:39:37.389169 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"58aec174-a80e-44a4-962f-4ad316dc0488","Type":"ContainerStarted","Data":"7a0dbd4b4a32da012bd7eba2e90069b5e5c9f39fc9060f643ecc47a860db1e1a"} Feb 27 10:39:37 crc kubenswrapper[4728]: I0227 10:39:37.416003 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.126718315 podStartE2EDuration="7.415980346s" podCreationTimestamp="2026-02-27 10:39:30 +0000 UTC" firstStartedPulling="2026-02-27 10:39:33.625876075 +0000 UTC m=+793.588242181" lastFinishedPulling="2026-02-27 10:39:36.915138106 +0000 UTC m=+796.877504212" observedRunningTime="2026-02-27 10:39:37.406435688 +0000 UTC m=+797.368801794" watchObservedRunningTime="2026-02-27 10:39:37.415980346 +0000 UTC m=+797.378346462" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.366456 4728 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx"] Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.368120 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.372599 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.373942 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.373954 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.374020 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.381055 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-fgpwj" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.401667 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx"] Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.459704 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c78e7-51af-4a23-8db8-5c19bf876bdc-config\") pod \"logging-loki-distributor-5d5548c9f5-mgsnx\" (UID: \"404c78e7-51af-4a23-8db8-5c19bf876bdc\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.459801 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/404c78e7-51af-4a23-8db8-5c19bf876bdc-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-mgsnx\" (UID: \"404c78e7-51af-4a23-8db8-5c19bf876bdc\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.459845 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/404c78e7-51af-4a23-8db8-5c19bf876bdc-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-mgsnx\" (UID: \"404c78e7-51af-4a23-8db8-5c19bf876bdc\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.459930 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/404c78e7-51af-4a23-8db8-5c19bf876bdc-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-mgsnx\" (UID: \"404c78e7-51af-4a23-8db8-5c19bf876bdc\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.459980 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vck8b\" (UniqueName: \"kubernetes.io/projected/404c78e7-51af-4a23-8db8-5c19bf876bdc-kube-api-access-vck8b\") pod \"logging-loki-distributor-5d5548c9f5-mgsnx\" (UID: \"404c78e7-51af-4a23-8db8-5c19bf876bdc\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.520466 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-thpcv"] Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.521829 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.533771 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.533997 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.550707 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.556385 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-thpcv"] Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.561198 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/404c78e7-51af-4a23-8db8-5c19bf876bdc-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-mgsnx\" (UID: \"404c78e7-51af-4a23-8db8-5c19bf876bdc\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.561259 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vck8b\" (UniqueName: \"kubernetes.io/projected/404c78e7-51af-4a23-8db8-5c19bf876bdc-kube-api-access-vck8b\") pod \"logging-loki-distributor-5d5548c9f5-mgsnx\" (UID: \"404c78e7-51af-4a23-8db8-5c19bf876bdc\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.561284 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c78e7-51af-4a23-8db8-5c19bf876bdc-config\") pod \"logging-loki-distributor-5d5548c9f5-mgsnx\" (UID: 
\"404c78e7-51af-4a23-8db8-5c19bf876bdc\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.561316 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/404c78e7-51af-4a23-8db8-5c19bf876bdc-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-mgsnx\" (UID: \"404c78e7-51af-4a23-8db8-5c19bf876bdc\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.561341 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/404c78e7-51af-4a23-8db8-5c19bf876bdc-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-mgsnx\" (UID: \"404c78e7-51af-4a23-8db8-5c19bf876bdc\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.562518 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c78e7-51af-4a23-8db8-5c19bf876bdc-config\") pod \"logging-loki-distributor-5d5548c9f5-mgsnx\" (UID: \"404c78e7-51af-4a23-8db8-5c19bf876bdc\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.566897 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/404c78e7-51af-4a23-8db8-5c19bf876bdc-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-mgsnx\" (UID: \"404c78e7-51af-4a23-8db8-5c19bf876bdc\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.571929 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" 
(UniqueName: \"kubernetes.io/secret/404c78e7-51af-4a23-8db8-5c19bf876bdc-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-mgsnx\" (UID: \"404c78e7-51af-4a23-8db8-5c19bf876bdc\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.572069 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/404c78e7-51af-4a23-8db8-5c19bf876bdc-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-mgsnx\" (UID: \"404c78e7-51af-4a23-8db8-5c19bf876bdc\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.584959 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vck8b\" (UniqueName: \"kubernetes.io/projected/404c78e7-51af-4a23-8db8-5c19bf876bdc-kube-api-access-vck8b\") pod \"logging-loki-distributor-5d5548c9f5-mgsnx\" (UID: \"404c78e7-51af-4a23-8db8-5c19bf876bdc\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.662297 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hstt4\" (UniqueName: \"kubernetes.io/projected/bd0adbab-805e-4ac7-b2a2-3c67275176e0-kube-api-access-hstt4\") pod \"logging-loki-querier-76bf7b6d45-thpcv\" (UID: \"bd0adbab-805e-4ac7-b2a2-3c67275176e0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.662359 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/bd0adbab-805e-4ac7-b2a2-3c67275176e0-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-thpcv\" (UID: \"bd0adbab-805e-4ac7-b2a2-3c67275176e0\") " 
pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.662389 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/bd0adbab-805e-4ac7-b2a2-3c67275176e0-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-thpcv\" (UID: \"bd0adbab-805e-4ac7-b2a2-3c67275176e0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.662470 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd0adbab-805e-4ac7-b2a2-3c67275176e0-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-thpcv\" (UID: \"bd0adbab-805e-4ac7-b2a2-3c67275176e0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.662497 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bd0adbab-805e-4ac7-b2a2-3c67275176e0-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-thpcv\" (UID: \"bd0adbab-805e-4ac7-b2a2-3c67275176e0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.662600 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd0adbab-805e-4ac7-b2a2-3c67275176e0-config\") pod \"logging-loki-querier-76bf7b6d45-thpcv\" (UID: \"bd0adbab-805e-4ac7-b2a2-3c67275176e0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.701719 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.720352 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-958gn"] Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.721183 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.727824 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.728315 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.741601 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-958gn"] Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.763519 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bd0adbab-805e-4ac7-b2a2-3c67275176e0-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-thpcv\" (UID: \"bd0adbab-805e-4ac7-b2a2-3c67275176e0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.763571 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd0adbab-805e-4ac7-b2a2-3c67275176e0-config\") pod \"logging-loki-querier-76bf7b6d45-thpcv\" (UID: \"bd0adbab-805e-4ac7-b2a2-3c67275176e0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.763607 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hstt4\" (UniqueName: \"kubernetes.io/projected/bd0adbab-805e-4ac7-b2a2-3c67275176e0-kube-api-access-hstt4\") pod \"logging-loki-querier-76bf7b6d45-thpcv\" (UID: \"bd0adbab-805e-4ac7-b2a2-3c67275176e0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.763647 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/bd0adbab-805e-4ac7-b2a2-3c67275176e0-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-thpcv\" (UID: \"bd0adbab-805e-4ac7-b2a2-3c67275176e0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.763680 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/bd0adbab-805e-4ac7-b2a2-3c67275176e0-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-thpcv\" (UID: \"bd0adbab-805e-4ac7-b2a2-3c67275176e0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.763734 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd0adbab-805e-4ac7-b2a2-3c67275176e0-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-thpcv\" (UID: \"bd0adbab-805e-4ac7-b2a2-3c67275176e0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.764524 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd0adbab-805e-4ac7-b2a2-3c67275176e0-config\") pod \"logging-loki-querier-76bf7b6d45-thpcv\" (UID: \"bd0adbab-805e-4ac7-b2a2-3c67275176e0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc 
kubenswrapper[4728]: I0227 10:39:45.765279 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd0adbab-805e-4ac7-b2a2-3c67275176e0-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-thpcv\" (UID: \"bd0adbab-805e-4ac7-b2a2-3c67275176e0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.774093 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/bd0adbab-805e-4ac7-b2a2-3c67275176e0-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-thpcv\" (UID: \"bd0adbab-805e-4ac7-b2a2-3c67275176e0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.774270 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bd0adbab-805e-4ac7-b2a2-3c67275176e0-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-thpcv\" (UID: \"bd0adbab-805e-4ac7-b2a2-3c67275176e0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.780704 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/bd0adbab-805e-4ac7-b2a2-3c67275176e0-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-thpcv\" (UID: \"bd0adbab-805e-4ac7-b2a2-3c67275176e0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.813370 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hstt4\" (UniqueName: \"kubernetes.io/projected/bd0adbab-805e-4ac7-b2a2-3c67275176e0-kube-api-access-hstt4\") pod \"logging-loki-querier-76bf7b6d45-thpcv\" (UID: 
\"bd0adbab-805e-4ac7-b2a2-3c67275176e0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.834089 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.844421 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8"] Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.845440 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.850993 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.853029 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.853236 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-jspcr" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.863104 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x"] Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.866855 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.867055 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.867708 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9dgc\" (UniqueName: 
\"kubernetes.io/projected/63bf6e97-3442-45f7-a57b-b811efabb073-kube-api-access-m9dgc\") pod \"logging-loki-query-frontend-6d6859c548-958gn\" (UID: \"63bf6e97-3442-45f7-a57b-b811efabb073\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.867761 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63bf6e97-3442-45f7-a57b-b811efabb073-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-958gn\" (UID: \"63bf6e97-3442-45f7-a57b-b811efabb073\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.867781 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63bf6e97-3442-45f7-a57b-b811efabb073-config\") pod \"logging-loki-query-frontend-6d6859c548-958gn\" (UID: \"63bf6e97-3442-45f7-a57b-b811efabb073\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.867813 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/63bf6e97-3442-45f7-a57b-b811efabb073-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-958gn\" (UID: \"63bf6e97-3442-45f7-a57b-b811efabb073\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.867833 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/63bf6e97-3442-45f7-a57b-b811efabb073-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-958gn\" 
(UID: \"63bf6e97-3442-45f7-a57b-b811efabb073\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.870325 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.870759 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.886290 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8"] Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.886331 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x"] Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970135 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a623f6cb-5632-4d1d-9754-7d146be81c79-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970401 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9dgc\" (UniqueName: \"kubernetes.io/projected/63bf6e97-3442-45f7-a57b-b811efabb073-kube-api-access-m9dgc\") pod \"logging-loki-query-frontend-6d6859c548-958gn\" (UID: \"63bf6e97-3442-45f7-a57b-b811efabb073\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970424 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: 
\"kubernetes.io/configmap/9d307382-547a-4a24-b552-9c3c2390a947-lokistack-gateway\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970460 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d307382-547a-4a24-b552-9c3c2390a947-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970481 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a623f6cb-5632-4d1d-9754-7d146be81c79-lokistack-gateway\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970497 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a623f6cb-5632-4d1d-9754-7d146be81c79-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970537 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63bf6e97-3442-45f7-a57b-b811efabb073-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-958gn\" (UID: \"63bf6e97-3442-45f7-a57b-b811efabb073\") " 
pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970556 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63bf6e97-3442-45f7-a57b-b811efabb073-config\") pod \"logging-loki-query-frontend-6d6859c548-958gn\" (UID: \"63bf6e97-3442-45f7-a57b-b811efabb073\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970579 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/9d307382-547a-4a24-b552-9c3c2390a947-tenants\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970602 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/9d307382-547a-4a24-b552-9c3c2390a947-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970617 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/9d307382-547a-4a24-b552-9c3c2390a947-tls-secret\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970637 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: 
\"kubernetes.io/secret/63bf6e97-3442-45f7-a57b-b811efabb073-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-958gn\" (UID: \"63bf6e97-3442-45f7-a57b-b811efabb073\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970652 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5mbp\" (UniqueName: \"kubernetes.io/projected/a623f6cb-5632-4d1d-9754-7d146be81c79-kube-api-access-w5mbp\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970670 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/9d307382-547a-4a24-b552-9c3c2390a947-rbac\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970689 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/63bf6e97-3442-45f7-a57b-b811efabb073-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-958gn\" (UID: \"63bf6e97-3442-45f7-a57b-b811efabb073\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970712 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a623f6cb-5632-4d1d-9754-7d146be81c79-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " 
pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970734 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a623f6cb-5632-4d1d-9754-7d146be81c79-tenants\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970750 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b88jl\" (UniqueName: \"kubernetes.io/projected/9d307382-547a-4a24-b552-9c3c2390a947-kube-api-access-b88jl\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970775 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a623f6cb-5632-4d1d-9754-7d146be81c79-tls-secret\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.970797 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a623f6cb-5632-4d1d-9754-7d146be81c79-rbac\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.971056 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9d307382-547a-4a24-b552-9c3c2390a947-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.971804 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63bf6e97-3442-45f7-a57b-b811efabb073-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-958gn\" (UID: \"63bf6e97-3442-45f7-a57b-b811efabb073\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.972352 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63bf6e97-3442-45f7-a57b-b811efabb073-config\") pod \"logging-loki-query-frontend-6d6859c548-958gn\" (UID: \"63bf6e97-3442-45f7-a57b-b811efabb073\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.980985 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/63bf6e97-3442-45f7-a57b-b811efabb073-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-958gn\" (UID: \"63bf6e97-3442-45f7-a57b-b811efabb073\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.984865 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/63bf6e97-3442-45f7-a57b-b811efabb073-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-958gn\" (UID: \"63bf6e97-3442-45f7-a57b-b811efabb073\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" 
Feb 27 10:39:45 crc kubenswrapper[4728]: I0227 10:39:45.989194 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9dgc\" (UniqueName: \"kubernetes.io/projected/63bf6e97-3442-45f7-a57b-b811efabb073-kube-api-access-m9dgc\") pod \"logging-loki-query-frontend-6d6859c548-958gn\" (UID: \"63bf6e97-3442-45f7-a57b-b811efabb073\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.072642 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a623f6cb-5632-4d1d-9754-7d146be81c79-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.072688 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a623f6cb-5632-4d1d-9754-7d146be81c79-tenants\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.072708 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b88jl\" (UniqueName: \"kubernetes.io/projected/9d307382-547a-4a24-b552-9c3c2390a947-kube-api-access-b88jl\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.072742 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a623f6cb-5632-4d1d-9754-7d146be81c79-tls-secret\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: 
\"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.072764 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a623f6cb-5632-4d1d-9754-7d146be81c79-rbac\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.072790 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d307382-547a-4a24-b552-9c3c2390a947-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.072811 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a623f6cb-5632-4d1d-9754-7d146be81c79-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.072836 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/9d307382-547a-4a24-b552-9c3c2390a947-lokistack-gateway\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.072870 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9d307382-547a-4a24-b552-9c3c2390a947-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.072891 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a623f6cb-5632-4d1d-9754-7d146be81c79-lokistack-gateway\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.072907 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a623f6cb-5632-4d1d-9754-7d146be81c79-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.072929 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/9d307382-547a-4a24-b552-9c3c2390a947-tenants\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.072965 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/9d307382-547a-4a24-b552-9c3c2390a947-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:46 crc 
kubenswrapper[4728]: I0227 10:39:46.072989 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/9d307382-547a-4a24-b552-9c3c2390a947-tls-secret\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.073006 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5mbp\" (UniqueName: \"kubernetes.io/projected/a623f6cb-5632-4d1d-9754-7d146be81c79-kube-api-access-w5mbp\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.073021 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/9d307382-547a-4a24-b552-9c3c2390a947-rbac\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.073598 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a623f6cb-5632-4d1d-9754-7d146be81c79-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.073899 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/9d307382-547a-4a24-b552-9c3c2390a947-rbac\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " 
pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.073929 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d307382-547a-4a24-b552-9c3c2390a947-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.074488 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d307382-547a-4a24-b552-9c3c2390a947-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.074562 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a623f6cb-5632-4d1d-9754-7d146be81c79-lokistack-gateway\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.075163 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a623f6cb-5632-4d1d-9754-7d146be81c79-tenants\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.075205 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a623f6cb-5632-4d1d-9754-7d146be81c79-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.075438 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a623f6cb-5632-4d1d-9754-7d146be81c79-rbac\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.075815 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/9d307382-547a-4a24-b552-9c3c2390a947-lokistack-gateway\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.077461 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a623f6cb-5632-4d1d-9754-7d146be81c79-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.077522 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/9d307382-547a-4a24-b552-9c3c2390a947-tenants\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.079140 4728 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/9d307382-547a-4a24-b552-9c3c2390a947-tls-secret\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.081590 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/9d307382-547a-4a24-b552-9c3c2390a947-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.091791 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b88jl\" (UniqueName: \"kubernetes.io/projected/9d307382-547a-4a24-b552-9c3c2390a947-kube-api-access-b88jl\") pod \"logging-loki-gateway-6c7d6ccd54-mdl8x\" (UID: \"9d307382-547a-4a24-b552-9c3c2390a947\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.092348 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5mbp\" (UniqueName: \"kubernetes.io/projected/a623f6cb-5632-4d1d-9754-7d146be81c79-kube-api-access-w5mbp\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.102055 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a623f6cb-5632-4d1d-9754-7d146be81c79-tls-secret\") pod \"logging-loki-gateway-6c7d6ccd54-z9tn8\" (UID: \"a623f6cb-5632-4d1d-9754-7d146be81c79\") " pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:46 crc kubenswrapper[4728]: 
I0227 10:39:46.164633 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.239731 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.249361 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.309471 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx"] Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.383937 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-thpcv"] Feb 27 10:39:46 crc kubenswrapper[4728]: W0227 10:39:46.409908 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd0adbab_805e_4ac7_b2a2_3c67275176e0.slice/crio-3f37dc4fbeaf8ca71d131e9da29b6a625443047f81c95aa5c3db632552aedd93 WatchSource:0}: Error finding container 3f37dc4fbeaf8ca71d131e9da29b6a625443047f81c95aa5c3db632552aedd93: Status 404 returned error can't find the container with id 3f37dc4fbeaf8ca71d131e9da29b6a625443047f81c95aa5c3db632552aedd93 Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.420243 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-958gn"] Feb 27 10:39:46 crc kubenswrapper[4728]: W0227 10:39:46.422966 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63bf6e97_3442_45f7_a57b_b811efabb073.slice/crio-19bad94ef2227adf8d1fd982be94d50acbdc555f9e38d2b9141a91d4a4c8ffbe WatchSource:0}: Error 
finding container 19bad94ef2227adf8d1fd982be94d50acbdc555f9e38d2b9141a91d4a4c8ffbe: Status 404 returned error can't find the container with id 19bad94ef2227adf8d1fd982be94d50acbdc555f9e38d2b9141a91d4a4c8ffbe Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.461349 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" event={"ID":"63bf6e97-3442-45f7-a57b-b811efabb073","Type":"ContainerStarted","Data":"19bad94ef2227adf8d1fd982be94d50acbdc555f9e38d2b9141a91d4a4c8ffbe"} Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.469312 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" event={"ID":"404c78e7-51af-4a23-8db8-5c19bf876bdc","Type":"ContainerStarted","Data":"f64434769a7110d570b05cdaaa22243f6953cafd76149a8d6088dfa374755af0"} Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.471351 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" event={"ID":"bd0adbab-805e-4ac7-b2a2-3c67275176e0","Type":"ContainerStarted","Data":"3f37dc4fbeaf8ca71d131e9da29b6a625443047f81c95aa5c3db632552aedd93"} Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.516972 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.518189 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.522942 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.525132 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.532094 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.584754 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0c811e4c-97d9-4db8-bc26-c843833e4ee9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c811e4c-97d9-4db8-bc26-c843833e4ee9\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.584805 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/30b747c6-8aaf-4862-ab83-c642456f025a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.584827 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ac0cc79e-c629-409c-b97e-87f9f78ed7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac0cc79e-c629-409c-b97e-87f9f78ed7a8\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.584847 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkjgl\" (UniqueName: \"kubernetes.io/projected/30b747c6-8aaf-4862-ab83-c642456f025a-kube-api-access-bkjgl\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.584864 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/30b747c6-8aaf-4862-ab83-c642456f025a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.584900 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30b747c6-8aaf-4862-ab83-c642456f025a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.584939 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/30b747c6-8aaf-4862-ab83-c642456f025a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.584986 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b747c6-8aaf-4862-ab83-c642456f025a-config\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 
27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.637481 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.640047 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.643404 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.644011 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.652277 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.686986 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnzp6\" (UniqueName: \"kubernetes.io/projected/d201247f-3eb4-46c6-a46c-6c37f5a28219-kube-api-access-jnzp6\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.687083 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0c811e4c-97d9-4db8-bc26-c843833e4ee9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c811e4c-97d9-4db8-bc26-c843833e4ee9\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.687149 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/d201247f-3eb4-46c6-a46c-6c37f5a28219-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.687180 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d201247f-3eb4-46c6-a46c-6c37f5a28219-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.687211 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/30b747c6-8aaf-4862-ab83-c642456f025a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.687244 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ac0cc79e-c629-409c-b97e-87f9f78ed7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac0cc79e-c629-409c-b97e-87f9f78ed7a8\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.687273 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d201247f-3eb4-46c6-a46c-6c37f5a28219-config\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.687304 4728 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-bkjgl\" (UniqueName: \"kubernetes.io/projected/30b747c6-8aaf-4862-ab83-c642456f025a-kube-api-access-bkjgl\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.687331 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/30b747c6-8aaf-4862-ab83-c642456f025a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.688183 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/d201247f-3eb4-46c6-a46c-6c37f5a28219-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.688622 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30b747c6-8aaf-4862-ab83-c642456f025a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.688687 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/d201247f-3eb4-46c6-a46c-6c37f5a28219-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 
10:39:46.688759 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/30b747c6-8aaf-4862-ab83-c642456f025a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.688780 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-75c33cc1-07cf-48eb-83de-b6bda9b9b2de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75c33cc1-07cf-48eb-83de-b6bda9b9b2de\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.688893 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b747c6-8aaf-4862-ab83-c642456f025a-config\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.689430 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30b747c6-8aaf-4862-ab83-c642456f025a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.689857 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b747c6-8aaf-4862-ab83-c642456f025a-config\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.691685 4728 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.691727 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ac0cc79e-c629-409c-b97e-87f9f78ed7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac0cc79e-c629-409c-b97e-87f9f78ed7a8\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a20851d2d1f19d7a89bf040a83f17c5bdf3f9ac9e85cd98182a13bf7f9544548/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.691751 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.691782 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0c811e4c-97d9-4db8-bc26-c843833e4ee9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c811e4c-97d9-4db8-bc26-c843833e4ee9\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/71c67a36fb553e79353b76798dfba2605f4ee07c80b7830512e3673a2b8bd02a/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.693759 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/30b747c6-8aaf-4862-ab83-c642456f025a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.693809 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/30b747c6-8aaf-4862-ab83-c642456f025a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.694417 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/30b747c6-8aaf-4862-ab83-c642456f025a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.708293 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkjgl\" (UniqueName: \"kubernetes.io/projected/30b747c6-8aaf-4862-ab83-c642456f025a-kube-api-access-bkjgl\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.727727 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ac0cc79e-c629-409c-b97e-87f9f78ed7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac0cc79e-c629-409c-b97e-87f9f78ed7a8\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.729352 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0c811e4c-97d9-4db8-bc26-c843833e4ee9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c811e4c-97d9-4db8-bc26-c843833e4ee9\") pod \"logging-loki-ingester-0\" (UID: \"30b747c6-8aaf-4862-ab83-c642456f025a\") " pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 
10:39:46.770282 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8"] Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.779873 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.780877 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.783300 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.783696 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.790361 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/d201247f-3eb4-46c6-a46c-6c37f5a28219-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.790414 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/d201247f-3eb4-46c6-a46c-6c37f5a28219-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.790449 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-75c33cc1-07cf-48eb-83de-b6bda9b9b2de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75c33cc1-07cf-48eb-83de-b6bda9b9b2de\") pod 
\"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.790534 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnzp6\" (UniqueName: \"kubernetes.io/projected/d201247f-3eb4-46c6-a46c-6c37f5a28219-kube-api-access-jnzp6\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.790588 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d201247f-3eb4-46c6-a46c-6c37f5a28219-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.790611 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d201247f-3eb4-46c6-a46c-6c37f5a28219-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.790645 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d201247f-3eb4-46c6-a46c-6c37f5a28219-config\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.793319 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d201247f-3eb4-46c6-a46c-6c37f5a28219-config\") pod \"logging-loki-compactor-0\" (UID: 
\"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.801029 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d201247f-3eb4-46c6-a46c-6c37f5a28219-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.801045 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/d201247f-3eb4-46c6-a46c-6c37f5a28219-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.801087 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.801295 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.801338 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-75c33cc1-07cf-48eb-83de-b6bda9b9b2de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75c33cc1-07cf-48eb-83de-b6bda9b9b2de\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a56a4f6a5251243dfaf7b990591d8e2259c578654ed867b7ee431d65599f7d35/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.802751 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/d201247f-3eb4-46c6-a46c-6c37f5a28219-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.809627 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d201247f-3eb4-46c6-a46c-6c37f5a28219-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.810794 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnzp6\" (UniqueName: \"kubernetes.io/projected/d201247f-3eb4-46c6-a46c-6c37f5a28219-kube-api-access-jnzp6\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.823924 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-75c33cc1-07cf-48eb-83de-b6bda9b9b2de\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75c33cc1-07cf-48eb-83de-b6bda9b9b2de\") pod \"logging-loki-compactor-0\" (UID: \"d201247f-3eb4-46c6-a46c-6c37f5a28219\") " pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.840984 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.858320 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x"] Feb 27 10:39:46 crc kubenswrapper[4728]: W0227 10:39:46.863210 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d307382_547a_4a24_b552_9c3c2390a947.slice/crio-f6d0b1f334753ad16a92c71b098a548962e3013e8ad40ad1ae388f912cc0f9ab WatchSource:0}: Error finding container f6d0b1f334753ad16a92c71b098a548962e3013e8ad40ad1ae388f912cc0f9ab: Status 404 returned error can't find the container with id f6d0b1f334753ad16a92c71b098a548962e3013e8ad40ad1ae388f912cc0f9ab Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.891952 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-aa35fe92-fd85-45fe-bbd6-b1733bbc9a35\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa35fe92-fd85-45fe-bbd6-b1733bbc9a35\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.892016 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/bc136f96-e5d5-4201-a560-a9759fee98d5-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.892044 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bc136f96-e5d5-4201-a560-a9759fee98d5-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.892068 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp7jg\" (UniqueName: \"kubernetes.io/projected/bc136f96-e5d5-4201-a560-a9759fee98d5-kube-api-access-qp7jg\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.892084 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/bc136f96-e5d5-4201-a560-a9759fee98d5-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.892102 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc136f96-e5d5-4201-a560-a9759fee98d5-config\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.892131 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bc136f96-e5d5-4201-a560-a9759fee98d5-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.959408 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.994038 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-aa35fe92-fd85-45fe-bbd6-b1733bbc9a35\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa35fe92-fd85-45fe-bbd6-b1733bbc9a35\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.994112 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/bc136f96-e5d5-4201-a560-a9759fee98d5-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.994147 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bc136f96-e5d5-4201-a560-a9759fee98d5-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.994182 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/bc136f96-e5d5-4201-a560-a9759fee98d5-logging-loki-index-gateway-grpc\") pod 
\"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.994203 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp7jg\" (UniqueName: \"kubernetes.io/projected/bc136f96-e5d5-4201-a560-a9759fee98d5-kube-api-access-qp7jg\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.994230 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc136f96-e5d5-4201-a560-a9759fee98d5-config\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.994272 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc136f96-e5d5-4201-a560-a9759fee98d5-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.995415 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc136f96-e5d5-4201-a560-a9759fee98d5-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.995523 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc136f96-e5d5-4201-a560-a9759fee98d5-config\") pod 
\"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.996603 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.996634 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-aa35fe92-fd85-45fe-bbd6-b1733bbc9a35\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa35fe92-fd85-45fe-bbd6-b1733bbc9a35\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0a1a20747131f90ee363b007ca2027478324decc074f613f06a39d8a515aa51c/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.998829 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/bc136f96-e5d5-4201-a560-a9759fee98d5-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.999132 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/bc136f96-e5d5-4201-a560-a9759fee98d5-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:46 crc kubenswrapper[4728]: I0227 10:39:46.999924 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/bc136f96-e5d5-4201-a560-a9759fee98d5-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:47 crc kubenswrapper[4728]: I0227 10:39:47.010604 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp7jg\" (UniqueName: \"kubernetes.io/projected/bc136f96-e5d5-4201-a560-a9759fee98d5-kube-api-access-qp7jg\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:47 crc kubenswrapper[4728]: I0227 10:39:47.034127 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-aa35fe92-fd85-45fe-bbd6-b1733bbc9a35\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa35fe92-fd85-45fe-bbd6-b1733bbc9a35\") pod \"logging-loki-index-gateway-0\" (UID: \"bc136f96-e5d5-4201-a560-a9759fee98d5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:47 crc kubenswrapper[4728]: I0227 10:39:47.142051 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:47 crc kubenswrapper[4728]: I0227 10:39:47.302313 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 27 10:39:47 crc kubenswrapper[4728]: W0227 10:39:47.312894 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30b747c6_8aaf_4862_ab83_c642456f025a.slice/crio-1999dbc932605d53f338241ff4b543b3d16ba3659205ddb24407fadbc5b00399 WatchSource:0}: Error finding container 1999dbc932605d53f338241ff4b543b3d16ba3659205ddb24407fadbc5b00399: Status 404 returned error can't find the container with id 1999dbc932605d53f338241ff4b543b3d16ba3659205ddb24407fadbc5b00399 Feb 27 10:39:47 crc kubenswrapper[4728]: I0227 10:39:47.346280 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 27 10:39:47 crc kubenswrapper[4728]: W0227 10:39:47.354976 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc136f96_e5d5_4201_a560_a9759fee98d5.slice/crio-ff6828627d3304c521c558e46085b74df06384e4952ec2f26711234ae2e15b15 WatchSource:0}: Error finding container ff6828627d3304c521c558e46085b74df06384e4952ec2f26711234ae2e15b15: Status 404 returned error can't find the container with id ff6828627d3304c521c558e46085b74df06384e4952ec2f26711234ae2e15b15 Feb 27 10:39:47 crc kubenswrapper[4728]: I0227 10:39:47.388866 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 27 10:39:47 crc kubenswrapper[4728]: W0227 10:39:47.398138 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd201247f_3eb4_46c6_a46c_6c37f5a28219.slice/crio-199473d5190899934395a84e8e7363cac99cf5073cafb608c2d1e2d9d401046e WatchSource:0}: Error 
finding container 199473d5190899934395a84e8e7363cac99cf5073cafb608c2d1e2d9d401046e: Status 404 returned error can't find the container with id 199473d5190899934395a84e8e7363cac99cf5073cafb608c2d1e2d9d401046e Feb 27 10:39:47 crc kubenswrapper[4728]: I0227 10:39:47.478575 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" event={"ID":"9d307382-547a-4a24-b552-9c3c2390a947","Type":"ContainerStarted","Data":"f6d0b1f334753ad16a92c71b098a548962e3013e8ad40ad1ae388f912cc0f9ab"} Feb 27 10:39:47 crc kubenswrapper[4728]: I0227 10:39:47.479740 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"bc136f96-e5d5-4201-a560-a9759fee98d5","Type":"ContainerStarted","Data":"ff6828627d3304c521c558e46085b74df06384e4952ec2f26711234ae2e15b15"} Feb 27 10:39:47 crc kubenswrapper[4728]: I0227 10:39:47.481378 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" event={"ID":"a623f6cb-5632-4d1d-9754-7d146be81c79","Type":"ContainerStarted","Data":"fc34162cd0245d561590d0896d7d7cc1d3d342caec2266d7e26e815a3c9c0daa"} Feb 27 10:39:47 crc kubenswrapper[4728]: I0227 10:39:47.482274 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"d201247f-3eb4-46c6-a46c-6c37f5a28219","Type":"ContainerStarted","Data":"199473d5190899934395a84e8e7363cac99cf5073cafb608c2d1e2d9d401046e"} Feb 27 10:39:47 crc kubenswrapper[4728]: I0227 10:39:47.483582 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"30b747c6-8aaf-4862-ab83-c642456f025a","Type":"ContainerStarted","Data":"1999dbc932605d53f338241ff4b543b3d16ba3659205ddb24407fadbc5b00399"} Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.515700 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" event={"ID":"bd0adbab-805e-4ac7-b2a2-3c67275176e0","Type":"ContainerStarted","Data":"05e1ca7fcf6b4fda088511e507d1faa1177ce594ff5b3dd3c0efaeacb2153351"} Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.516459 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.518477 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"bc136f96-e5d5-4201-a560-a9759fee98d5","Type":"ContainerStarted","Data":"540bbac66adaec9bce058c2d5a23c72cd4fa23c8bee11ee351dc3c5fce5f5113"} Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.518683 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.521126 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" event={"ID":"404c78e7-51af-4a23-8db8-5c19bf876bdc","Type":"ContainerStarted","Data":"1b37f07459835696889a28ba7e414ed2e415b470039e702c020ac3b8421a3c2d"} Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.521783 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.523252 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" event={"ID":"a623f6cb-5632-4d1d-9754-7d146be81c79","Type":"ContainerStarted","Data":"4a89970e1bc8cf80b3a7bacd91c4b0b835efc24f44496ed86ae4c2717f89fa60"} Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.524862 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" 
event={"ID":"d201247f-3eb4-46c6-a46c-6c37f5a28219","Type":"ContainerStarted","Data":"e4b113e31328df2bde5fcd4cfe48561df95382260611399bb48c75eb5d80c0a2"} Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.525022 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.527169 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" event={"ID":"63bf6e97-3442-45f7-a57b-b811efabb073","Type":"ContainerStarted","Data":"143d1e38b1cf762cd7bd89ba7aa26ad348f8e66acafa81955984a69edf38d855"} Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.527290 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.531186 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.531855 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"30b747c6-8aaf-4862-ab83-c642456f025a","Type":"ContainerStarted","Data":"16540af67934221632f2b30daa9cd6700ac749b5f502b8bdc108245fcfac3a73"} Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.540245 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" event={"ID":"9d307382-547a-4a24-b552-9c3c2390a947","Type":"ContainerStarted","Data":"b57945b2200328586e8ac94e27ccff81ae8456e692ce5e14138845d07a45a407"} Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.546204 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" podStartSLOduration=2.455512667 podStartE2EDuration="6.546180659s" 
podCreationTimestamp="2026-02-27 10:39:45 +0000 UTC" firstStartedPulling="2026-02-27 10:39:46.418704995 +0000 UTC m=+806.381071101" lastFinishedPulling="2026-02-27 10:39:50.509372997 +0000 UTC m=+810.471739093" observedRunningTime="2026-02-27 10:39:51.534007259 +0000 UTC m=+811.496373365" watchObservedRunningTime="2026-02-27 10:39:51.546180659 +0000 UTC m=+811.508546775" Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.562876 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" podStartSLOduration=2.533739152 podStartE2EDuration="6.56285133s" podCreationTimestamp="2026-02-27 10:39:45 +0000 UTC" firstStartedPulling="2026-02-27 10:39:46.428933061 +0000 UTC m=+806.391299167" lastFinishedPulling="2026-02-27 10:39:50.458045239 +0000 UTC m=+810.420411345" observedRunningTime="2026-02-27 10:39:51.553033044 +0000 UTC m=+811.515399150" watchObservedRunningTime="2026-02-27 10:39:51.56285133 +0000 UTC m=+811.525217436" Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.582356 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" podStartSLOduration=2.484567451 podStartE2EDuration="6.582339617s" podCreationTimestamp="2026-02-27 10:39:45 +0000 UTC" firstStartedPulling="2026-02-27 10:39:46.349008529 +0000 UTC m=+806.311374625" lastFinishedPulling="2026-02-27 10:39:50.446780685 +0000 UTC m=+810.409146791" observedRunningTime="2026-02-27 10:39:51.579558072 +0000 UTC m=+811.541924198" watchObservedRunningTime="2026-02-27 10:39:51.582339617 +0000 UTC m=+811.544705723" Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.606490 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.357442618 podStartE2EDuration="6.606292805s" podCreationTimestamp="2026-02-27 10:39:45 +0000 UTC" firstStartedPulling="2026-02-27 
10:39:47.315119927 +0000 UTC m=+807.277486033" lastFinishedPulling="2026-02-27 10:39:50.563970114 +0000 UTC m=+810.526336220" observedRunningTime="2026-02-27 10:39:51.600048366 +0000 UTC m=+811.562414472" watchObservedRunningTime="2026-02-27 10:39:51.606292805 +0000 UTC m=+811.568658901" Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.620766 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.510888669 podStartE2EDuration="6.620752456s" podCreationTimestamp="2026-02-27 10:39:45 +0000 UTC" firstStartedPulling="2026-02-27 10:39:47.400476757 +0000 UTC m=+807.362842853" lastFinishedPulling="2026-02-27 10:39:50.510340534 +0000 UTC m=+810.472706640" observedRunningTime="2026-02-27 10:39:51.620202321 +0000 UTC m=+811.582568447" watchObservedRunningTime="2026-02-27 10:39:51.620752456 +0000 UTC m=+811.583118562" Feb 27 10:39:51 crc kubenswrapper[4728]: I0227 10:39:51.645810 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.484063592 podStartE2EDuration="6.645792663s" podCreationTimestamp="2026-02-27 10:39:45 +0000 UTC" firstStartedPulling="2026-02-27 10:39:47.357941436 +0000 UTC m=+807.320307542" lastFinishedPulling="2026-02-27 10:39:50.519670477 +0000 UTC m=+810.482036613" observedRunningTime="2026-02-27 10:39:51.641247671 +0000 UTC m=+811.603613767" watchObservedRunningTime="2026-02-27 10:39:51.645792663 +0000 UTC m=+811.608158769" Feb 27 10:39:53 crc kubenswrapper[4728]: I0227 10:39:53.566634 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" event={"ID":"9d307382-547a-4a24-b552-9c3c2390a947","Type":"ContainerStarted","Data":"1381f87036ff08c773e2622a545b48ad3d218528650ab76f5f120f568a8943f5"} Feb 27 10:39:53 crc kubenswrapper[4728]: I0227 10:39:53.567057 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:53 crc kubenswrapper[4728]: I0227 10:39:53.567081 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:53 crc kubenswrapper[4728]: I0227 10:39:53.570962 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" event={"ID":"a623f6cb-5632-4d1d-9754-7d146be81c79","Type":"ContainerStarted","Data":"f5fcae84a9b5e67695e67a7d614e7acb30b1ed19817529d2de7525be91198292"} Feb 27 10:39:53 crc kubenswrapper[4728]: I0227 10:39:53.590281 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:53 crc kubenswrapper[4728]: I0227 10:39:53.591075 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" Feb 27 10:39:53 crc kubenswrapper[4728]: I0227 10:39:53.608116 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-mdl8x" podStartSLOduration=2.405034711 podStartE2EDuration="8.608087224s" podCreationTimestamp="2026-02-27 10:39:45 +0000 UTC" firstStartedPulling="2026-02-27 10:39:46.865350089 +0000 UTC m=+806.827716195" lastFinishedPulling="2026-02-27 10:39:53.068402602 +0000 UTC m=+813.030768708" observedRunningTime="2026-02-27 10:39:53.606549472 +0000 UTC m=+813.568915588" watchObservedRunningTime="2026-02-27 10:39:53.608087224 +0000 UTC m=+813.570453360" Feb 27 10:39:53 crc kubenswrapper[4728]: I0227 10:39:53.638549 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" podStartSLOduration=2.356519628 podStartE2EDuration="8.638497106s" podCreationTimestamp="2026-02-27 10:39:45 +0000 UTC" firstStartedPulling="2026-02-27 
10:39:46.781233083 +0000 UTC m=+806.743599189" lastFinishedPulling="2026-02-27 10:39:53.063210561 +0000 UTC m=+813.025576667" observedRunningTime="2026-02-27 10:39:53.628454384 +0000 UTC m=+813.590820530" watchObservedRunningTime="2026-02-27 10:39:53.638497106 +0000 UTC m=+813.600863252" Feb 27 10:39:54 crc kubenswrapper[4728]: I0227 10:39:54.579995 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:54 crc kubenswrapper[4728]: I0227 10:39:54.580437 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:54 crc kubenswrapper[4728]: I0227 10:39:54.593724 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:39:54 crc kubenswrapper[4728]: I0227 10:39:54.594717 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6c7d6ccd54-z9tn8" Feb 27 10:40:00 crc kubenswrapper[4728]: I0227 10:40:00.144448 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536480-5pkgk"] Feb 27 10:40:00 crc kubenswrapper[4728]: I0227 10:40:00.147063 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536480-5pkgk" Feb 27 10:40:00 crc kubenswrapper[4728]: I0227 10:40:00.149439 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 10:40:00 crc kubenswrapper[4728]: I0227 10:40:00.149454 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:40:00 crc kubenswrapper[4728]: I0227 10:40:00.149758 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:40:00 crc kubenswrapper[4728]: I0227 10:40:00.151299 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536480-5pkgk"] Feb 27 10:40:00 crc kubenswrapper[4728]: I0227 10:40:00.231849 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvrc6\" (UniqueName: \"kubernetes.io/projected/6047c131-51e2-4139-b721-183ee9db08e3-kube-api-access-lvrc6\") pod \"auto-csr-approver-29536480-5pkgk\" (UID: \"6047c131-51e2-4139-b721-183ee9db08e3\") " pod="openshift-infra/auto-csr-approver-29536480-5pkgk" Feb 27 10:40:00 crc kubenswrapper[4728]: I0227 10:40:00.333007 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvrc6\" (UniqueName: \"kubernetes.io/projected/6047c131-51e2-4139-b721-183ee9db08e3-kube-api-access-lvrc6\") pod \"auto-csr-approver-29536480-5pkgk\" (UID: \"6047c131-51e2-4139-b721-183ee9db08e3\") " pod="openshift-infra/auto-csr-approver-29536480-5pkgk" Feb 27 10:40:00 crc kubenswrapper[4728]: I0227 10:40:00.360028 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvrc6\" (UniqueName: \"kubernetes.io/projected/6047c131-51e2-4139-b721-183ee9db08e3-kube-api-access-lvrc6\") pod \"auto-csr-approver-29536480-5pkgk\" (UID: \"6047c131-51e2-4139-b721-183ee9db08e3\") " 
pod="openshift-infra/auto-csr-approver-29536480-5pkgk" Feb 27 10:40:00 crc kubenswrapper[4728]: I0227 10:40:00.477668 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536480-5pkgk" Feb 27 10:40:00 crc kubenswrapper[4728]: I0227 10:40:00.950200 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536480-5pkgk"] Feb 27 10:40:00 crc kubenswrapper[4728]: W0227 10:40:00.956499 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6047c131_51e2_4139_b721_183ee9db08e3.slice/crio-08dd0d8c480f88b197659723947a33a21d294a06d66f69f6f3fabf41de03d3e9 WatchSource:0}: Error finding container 08dd0d8c480f88b197659723947a33a21d294a06d66f69f6f3fabf41de03d3e9: Status 404 returned error can't find the container with id 08dd0d8c480f88b197659723947a33a21d294a06d66f69f6f3fabf41de03d3e9 Feb 27 10:40:01 crc kubenswrapper[4728]: I0227 10:40:01.646370 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536480-5pkgk" event={"ID":"6047c131-51e2-4139-b721-183ee9db08e3","Type":"ContainerStarted","Data":"08dd0d8c480f88b197659723947a33a21d294a06d66f69f6f3fabf41de03d3e9"} Feb 27 10:40:02 crc kubenswrapper[4728]: I0227 10:40:02.653806 4728 generic.go:334] "Generic (PLEG): container finished" podID="6047c131-51e2-4139-b721-183ee9db08e3" containerID="32ffc77fce0486bb172e2a6183397a95b8d10e04eee3214d055edb961fd1a7d4" exitCode=0 Feb 27 10:40:02 crc kubenswrapper[4728]: I0227 10:40:02.653877 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536480-5pkgk" event={"ID":"6047c131-51e2-4139-b721-183ee9db08e3","Type":"ContainerDied","Data":"32ffc77fce0486bb172e2a6183397a95b8d10e04eee3214d055edb961fd1a7d4"} Feb 27 10:40:03 crc kubenswrapper[4728]: I0227 10:40:03.955905 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536480-5pkgk" Feb 27 10:40:04 crc kubenswrapper[4728]: I0227 10:40:04.099826 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvrc6\" (UniqueName: \"kubernetes.io/projected/6047c131-51e2-4139-b721-183ee9db08e3-kube-api-access-lvrc6\") pod \"6047c131-51e2-4139-b721-183ee9db08e3\" (UID: \"6047c131-51e2-4139-b721-183ee9db08e3\") " Feb 27 10:40:04 crc kubenswrapper[4728]: I0227 10:40:04.105716 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6047c131-51e2-4139-b721-183ee9db08e3-kube-api-access-lvrc6" (OuterVolumeSpecName: "kube-api-access-lvrc6") pod "6047c131-51e2-4139-b721-183ee9db08e3" (UID: "6047c131-51e2-4139-b721-183ee9db08e3"). InnerVolumeSpecName "kube-api-access-lvrc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:40:04 crc kubenswrapper[4728]: I0227 10:40:04.201860 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvrc6\" (UniqueName: \"kubernetes.io/projected/6047c131-51e2-4139-b721-183ee9db08e3-kube-api-access-lvrc6\") on node \"crc\" DevicePath \"\"" Feb 27 10:40:04 crc kubenswrapper[4728]: I0227 10:40:04.669842 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536480-5pkgk" event={"ID":"6047c131-51e2-4139-b721-183ee9db08e3","Type":"ContainerDied","Data":"08dd0d8c480f88b197659723947a33a21d294a06d66f69f6f3fabf41de03d3e9"} Feb 27 10:40:04 crc kubenswrapper[4728]: I0227 10:40:04.669891 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08dd0d8c480f88b197659723947a33a21d294a06d66f69f6f3fabf41de03d3e9" Feb 27 10:40:04 crc kubenswrapper[4728]: I0227 10:40:04.669962 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536480-5pkgk" Feb 27 10:40:05 crc kubenswrapper[4728]: I0227 10:40:05.048290 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536474-8lbnm"] Feb 27 10:40:05 crc kubenswrapper[4728]: I0227 10:40:05.059122 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536474-8lbnm"] Feb 27 10:40:05 crc kubenswrapper[4728]: I0227 10:40:05.709808 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mgsnx" Feb 27 10:40:05 crc kubenswrapper[4728]: I0227 10:40:05.845612 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-thpcv" Feb 27 10:40:06 crc kubenswrapper[4728]: I0227 10:40:06.174001 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-958gn" Feb 27 10:40:06 crc kubenswrapper[4728]: I0227 10:40:06.737090 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c489801-99b4-490f-b2b9-f475aabf3f7b" path="/var/lib/kubelet/pods/4c489801-99b4-490f-b2b9-f475aabf3f7b/volumes" Feb 27 10:40:06 crc kubenswrapper[4728]: I0227 10:40:06.848590 4728 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 27 10:40:06 crc kubenswrapper[4728]: I0227 10:40:06.848682 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="30b747c6-8aaf-4862-ab83-c642456f025a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 27 10:40:06 crc kubenswrapper[4728]: I0227 10:40:06.987127 4728 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 27 10:40:07 crc kubenswrapper[4728]: I0227 10:40:07.149967 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 27 10:40:16 crc kubenswrapper[4728]: I0227 10:40:16.847127 4728 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 27 10:40:16 crc kubenswrapper[4728]: I0227 10:40:16.847578 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="30b747c6-8aaf-4862-ab83-c642456f025a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 27 10:40:26 crc kubenswrapper[4728]: I0227 10:40:26.846350 4728 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 27 10:40:26 crc kubenswrapper[4728]: I0227 10:40:26.846885 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="30b747c6-8aaf-4862-ab83-c642456f025a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 27 10:40:36 crc kubenswrapper[4728]: I0227 10:40:36.874744 4728 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 27 10:40:36 crc kubenswrapper[4728]: I0227 10:40:36.875349 4728 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="30b747c6-8aaf-4862-ab83-c642456f025a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 27 10:40:46 crc kubenswrapper[4728]: I0227 10:40:46.848947 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 27 10:40:47 crc kubenswrapper[4728]: I0227 10:40:47.854900 4728 scope.go:117] "RemoveContainer" containerID="5840238f596c5bcf14d0d11275e0feb412a632cfba64474a091404dc0cd896aa" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.376774 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-6v4lk"] Feb 27 10:41:04 crc kubenswrapper[4728]: E0227 10:41:04.377577 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6047c131-51e2-4139-b721-183ee9db08e3" containerName="oc" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.377590 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6047c131-51e2-4139-b721-183ee9db08e3" containerName="oc" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.377737 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6047c131-51e2-4139-b721-183ee9db08e3" containerName="oc" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.378313 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.380841 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.381036 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.381295 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.381490 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-gdkzj" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.381798 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.391886 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.405223 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-6v4lk"] Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.449804 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-6v4lk"] Feb 27 10:41:04 crc kubenswrapper[4728]: E0227 10:41:04.450934 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-w7nx7 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-w7nx7 metrics sa-token tmp trusted-ca]: context canceled" pod="openshift-logging/collector-6v4lk" 
podUID="bcc6975b-cbb5-4b8d-9da1-548ac402f3f7" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.469683 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-tmp\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.469741 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-sa-token\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.469777 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-collector-token\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.469799 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-config-openshift-service-cacrt\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.469816 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7nx7\" (UniqueName: \"kubernetes.io/projected/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-kube-api-access-w7nx7\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " 
pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.469846 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-config\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.469888 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-trusted-ca\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.469922 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-datadir\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.469952 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-collector-syslog-receiver\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.469987 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-entrypoint\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc 
kubenswrapper[4728]: I0227 10:41:04.470004 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-metrics\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.572074 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-tmp\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.572371 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-sa-token\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.572588 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-collector-token\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.572736 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-config-openshift-service-cacrt\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.572859 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-w7nx7\" (UniqueName: \"kubernetes.io/projected/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-kube-api-access-w7nx7\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.572999 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-config\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.573147 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-trusted-ca\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.573286 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-datadir\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.573432 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-collector-syslog-receiver\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: E0227 10:41:04.573586 4728 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Feb 27 10:41:04 crc kubenswrapper[4728]: E0227 10:41:04.573690 4728 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-collector-syslog-receiver podName:bcc6975b-cbb5-4b8d-9da1-548ac402f3f7 nodeName:}" failed. No retries permitted until 2026-02-27 10:41:05.073662733 +0000 UTC m=+885.036028869 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-collector-syslog-receiver") pod "collector-6v4lk" (UID: "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7") : secret "collector-syslog-receiver" not found Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.573794 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-config-openshift-service-cacrt\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.573802 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-entrypoint\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.573856 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-config\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.573425 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-datadir\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " 
pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.573908 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-metrics\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.574430 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-trusted-ca\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.575040 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-entrypoint\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.577753 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-tmp\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.590050 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-sa-token\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.590957 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: 
\"kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-metrics\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.591025 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7nx7\" (UniqueName: \"kubernetes.io/projected/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-kube-api-access-w7nx7\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:04 crc kubenswrapper[4728]: I0227 10:41:04.594070 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-collector-token\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.082445 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-collector-syslog-receiver\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.086223 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-collector-syslog-receiver\") pod \"collector-6v4lk\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " pod="openshift-logging/collector-6v4lk" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.222028 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-6v4lk" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.233641 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-6v4lk" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.285451 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-trusted-ca\") pod \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.285558 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-tmp\") pod \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.285629 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-metrics\") pod \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.285674 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-collector-token\") pod \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.285710 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-sa-token\") pod \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.285759 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: 
\"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-config-openshift-service-cacrt\") pod \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.285794 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-entrypoint\") pod \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.285830 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7nx7\" (UniqueName: \"kubernetes.io/projected/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-kube-api-access-w7nx7\") pod \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.285892 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-config\") pod \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.285940 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-datadir\") pod \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.285996 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-collector-syslog-receiver\") pod \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\" (UID: \"bcc6975b-cbb5-4b8d-9da1-548ac402f3f7\") " Feb 27 10:41:05 crc kubenswrapper[4728]: 
I0227 10:41:05.286032 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7" (UID: "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.286433 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-config" (OuterVolumeSpecName: "config") pod "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7" (UID: "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.286447 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7" (UID: "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7"). InnerVolumeSpecName "entrypoint". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.286724 4728 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.286739 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.286749 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.287533 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-datadir" (OuterVolumeSpecName: "datadir") pod "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7" (UID: "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.287903 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7" (UID: "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7"). InnerVolumeSpecName "config-openshift-service-cacrt". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.288769 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-metrics" (OuterVolumeSpecName: "metrics") pod "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7" (UID: "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.289735 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-kube-api-access-w7nx7" (OuterVolumeSpecName: "kube-api-access-w7nx7") pod "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7" (UID: "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7"). InnerVolumeSpecName "kube-api-access-w7nx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.291275 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-sa-token" (OuterVolumeSpecName: "sa-token") pod "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7" (UID: "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.292429 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7" (UID: "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7"). InnerVolumeSpecName "collector-syslog-receiver". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.300850 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-tmp" (OuterVolumeSpecName: "tmp") pod "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7" (UID: "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.303253 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-collector-token" (OuterVolumeSpecName: "collector-token") pod "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7" (UID: "bcc6975b-cbb5-4b8d-9da1-548ac402f3f7"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.391470 4728 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-tmp\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.391512 4728 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-metrics\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.391523 4728 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-collector-token\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.391533 4728 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.391546 4728 
reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.391556 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7nx7\" (UniqueName: \"kubernetes.io/projected/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-kube-api-access-w7nx7\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.391566 4728 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-datadir\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.391574 4728 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.923151 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:41:05 crc kubenswrapper[4728]: I0227 10:41:05.923337 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.230986 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-6v4lk" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.302463 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-6v4lk"] Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.315768 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-6v4lk"] Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.324119 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-44qdc"] Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.325524 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.328532 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-gdkzj" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.329113 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.329370 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.334179 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.335404 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.340135 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.349097 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-44qdc"] Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.409544 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dclkp\" (UniqueName: \"kubernetes.io/projected/7cf5ad18-3d58-4d35-8255-c581c2d2b722-kube-api-access-dclkp\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.409599 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7cf5ad18-3d58-4d35-8255-c581c2d2b722-trusted-ca\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.409629 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7cf5ad18-3d58-4d35-8255-c581c2d2b722-collector-syslog-receiver\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.409697 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7cf5ad18-3d58-4d35-8255-c581c2d2b722-sa-token\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.409726 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/7cf5ad18-3d58-4d35-8255-c581c2d2b722-config-openshift-service-cacrt\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.409807 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/7cf5ad18-3d58-4d35-8255-c581c2d2b722-datadir\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.409932 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cf5ad18-3d58-4d35-8255-c581c2d2b722-config\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.409992 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7cf5ad18-3d58-4d35-8255-c581c2d2b722-tmp\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.410100 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7cf5ad18-3d58-4d35-8255-c581c2d2b722-metrics\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.410207 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/7cf5ad18-3d58-4d35-8255-c581c2d2b722-collector-token\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.410243 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" 
(UniqueName: \"kubernetes.io/configmap/7cf5ad18-3d58-4d35-8255-c581c2d2b722-entrypoint\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.512303 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dclkp\" (UniqueName: \"kubernetes.io/projected/7cf5ad18-3d58-4d35-8255-c581c2d2b722-kube-api-access-dclkp\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.512362 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7cf5ad18-3d58-4d35-8255-c581c2d2b722-trusted-ca\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.512392 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7cf5ad18-3d58-4d35-8255-c581c2d2b722-collector-syslog-receiver\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.512442 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7cf5ad18-3d58-4d35-8255-c581c2d2b722-sa-token\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.512469 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/7cf5ad18-3d58-4d35-8255-c581c2d2b722-config-openshift-service-cacrt\") pod 
\"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.512494 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/7cf5ad18-3d58-4d35-8255-c581c2d2b722-datadir\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.512543 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cf5ad18-3d58-4d35-8255-c581c2d2b722-config\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.512570 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7cf5ad18-3d58-4d35-8255-c581c2d2b722-tmp\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.512621 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7cf5ad18-3d58-4d35-8255-c581c2d2b722-metrics\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.512676 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/7cf5ad18-3d58-4d35-8255-c581c2d2b722-collector-token\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.512706 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/7cf5ad18-3d58-4d35-8255-c581c2d2b722-entrypoint\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.513370 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/7cf5ad18-3d58-4d35-8255-c581c2d2b722-datadir\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.514270 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/7cf5ad18-3d58-4d35-8255-c581c2d2b722-config-openshift-service-cacrt\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.514434 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/7cf5ad18-3d58-4d35-8255-c581c2d2b722-entrypoint\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.514865 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cf5ad18-3d58-4d35-8255-c581c2d2b722-config\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.516175 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7cf5ad18-3d58-4d35-8255-c581c2d2b722-trusted-ca\") pod \"collector-44qdc\" (UID: 
\"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.519770 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7cf5ad18-3d58-4d35-8255-c581c2d2b722-metrics\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.521021 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7cf5ad18-3d58-4d35-8255-c581c2d2b722-collector-syslog-receiver\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.522353 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/7cf5ad18-3d58-4d35-8255-c581c2d2b722-collector-token\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.525735 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7cf5ad18-3d58-4d35-8255-c581c2d2b722-tmp\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.534789 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7cf5ad18-3d58-4d35-8255-c581c2d2b722-sa-token\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.546052 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-dclkp\" (UniqueName: \"kubernetes.io/projected/7cf5ad18-3d58-4d35-8255-c581c2d2b722-kube-api-access-dclkp\") pod \"collector-44qdc\" (UID: \"7cf5ad18-3d58-4d35-8255-c581c2d2b722\") " pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.646974 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-44qdc" Feb 27 10:41:06 crc kubenswrapper[4728]: I0227 10:41:06.734263 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcc6975b-cbb5-4b8d-9da1-548ac402f3f7" path="/var/lib/kubelet/pods/bcc6975b-cbb5-4b8d-9da1-548ac402f3f7/volumes" Feb 27 10:41:07 crc kubenswrapper[4728]: I0227 10:41:07.109739 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-44qdc"] Feb 27 10:41:07 crc kubenswrapper[4728]: I0227 10:41:07.132731 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 10:41:07 crc kubenswrapper[4728]: I0227 10:41:07.241196 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-44qdc" event={"ID":"7cf5ad18-3d58-4d35-8255-c581c2d2b722","Type":"ContainerStarted","Data":"1b81ccfcd6ec322057d55e30877c7ebabe56cbad33d0f09fcdc6dc4c6f2fb0ec"} Feb 27 10:41:08 crc kubenswrapper[4728]: I0227 10:41:08.549513 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6p6pb"] Feb 27 10:41:08 crc kubenswrapper[4728]: I0227 10:41:08.570687 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6p6pb"] Feb 27 10:41:08 crc kubenswrapper[4728]: I0227 10:41:08.571198 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6p6pb" Feb 27 10:41:08 crc kubenswrapper[4728]: I0227 10:41:08.652614 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6wpw\" (UniqueName: \"kubernetes.io/projected/1e33db72-7e87-4461-a619-842b33254576-kube-api-access-g6wpw\") pod \"redhat-operators-6p6pb\" (UID: \"1e33db72-7e87-4461-a619-842b33254576\") " pod="openshift-marketplace/redhat-operators-6p6pb" Feb 27 10:41:08 crc kubenswrapper[4728]: I0227 10:41:08.652759 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e33db72-7e87-4461-a619-842b33254576-catalog-content\") pod \"redhat-operators-6p6pb\" (UID: \"1e33db72-7e87-4461-a619-842b33254576\") " pod="openshift-marketplace/redhat-operators-6p6pb" Feb 27 10:41:08 crc kubenswrapper[4728]: I0227 10:41:08.652827 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e33db72-7e87-4461-a619-842b33254576-utilities\") pod \"redhat-operators-6p6pb\" (UID: \"1e33db72-7e87-4461-a619-842b33254576\") " pod="openshift-marketplace/redhat-operators-6p6pb" Feb 27 10:41:08 crc kubenswrapper[4728]: I0227 10:41:08.754473 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e33db72-7e87-4461-a619-842b33254576-catalog-content\") pod \"redhat-operators-6p6pb\" (UID: \"1e33db72-7e87-4461-a619-842b33254576\") " pod="openshift-marketplace/redhat-operators-6p6pb" Feb 27 10:41:08 crc kubenswrapper[4728]: I0227 10:41:08.754687 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e33db72-7e87-4461-a619-842b33254576-utilities\") pod \"redhat-operators-6p6pb\" (UID: 
\"1e33db72-7e87-4461-a619-842b33254576\") " pod="openshift-marketplace/redhat-operators-6p6pb" Feb 27 10:41:08 crc kubenswrapper[4728]: I0227 10:41:08.754712 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6wpw\" (UniqueName: \"kubernetes.io/projected/1e33db72-7e87-4461-a619-842b33254576-kube-api-access-g6wpw\") pod \"redhat-operators-6p6pb\" (UID: \"1e33db72-7e87-4461-a619-842b33254576\") " pod="openshift-marketplace/redhat-operators-6p6pb" Feb 27 10:41:08 crc kubenswrapper[4728]: I0227 10:41:08.755863 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e33db72-7e87-4461-a619-842b33254576-catalog-content\") pod \"redhat-operators-6p6pb\" (UID: \"1e33db72-7e87-4461-a619-842b33254576\") " pod="openshift-marketplace/redhat-operators-6p6pb" Feb 27 10:41:08 crc kubenswrapper[4728]: I0227 10:41:08.756126 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e33db72-7e87-4461-a619-842b33254576-utilities\") pod \"redhat-operators-6p6pb\" (UID: \"1e33db72-7e87-4461-a619-842b33254576\") " pod="openshift-marketplace/redhat-operators-6p6pb" Feb 27 10:41:08 crc kubenswrapper[4728]: I0227 10:41:08.774793 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6wpw\" (UniqueName: \"kubernetes.io/projected/1e33db72-7e87-4461-a619-842b33254576-kube-api-access-g6wpw\") pod \"redhat-operators-6p6pb\" (UID: \"1e33db72-7e87-4461-a619-842b33254576\") " pod="openshift-marketplace/redhat-operators-6p6pb" Feb 27 10:41:08 crc kubenswrapper[4728]: I0227 10:41:08.909081 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6p6pb" Feb 27 10:41:09 crc kubenswrapper[4728]: I0227 10:41:09.338467 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6p6pb"] Feb 27 10:41:09 crc kubenswrapper[4728]: W0227 10:41:09.347611 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e33db72_7e87_4461_a619_842b33254576.slice/crio-68044e71cf32f646ac9743bbaae503071d3d55440a5e585a9960ee882326f819 WatchSource:0}: Error finding container 68044e71cf32f646ac9743bbaae503071d3d55440a5e585a9960ee882326f819: Status 404 returned error can't find the container with id 68044e71cf32f646ac9743bbaae503071d3d55440a5e585a9960ee882326f819 Feb 27 10:41:10 crc kubenswrapper[4728]: I0227 10:41:10.267290 4728 generic.go:334] "Generic (PLEG): container finished" podID="1e33db72-7e87-4461-a619-842b33254576" containerID="30add3ab78f3849e2da035d722cb1e6a9c9d69b136b11aab50af17d5da5016e9" exitCode=0 Feb 27 10:41:10 crc kubenswrapper[4728]: I0227 10:41:10.267339 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6p6pb" event={"ID":"1e33db72-7e87-4461-a619-842b33254576","Type":"ContainerDied","Data":"30add3ab78f3849e2da035d722cb1e6a9c9d69b136b11aab50af17d5da5016e9"} Feb 27 10:41:10 crc kubenswrapper[4728]: I0227 10:41:10.267844 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6p6pb" event={"ID":"1e33db72-7e87-4461-a619-842b33254576","Type":"ContainerStarted","Data":"68044e71cf32f646ac9743bbaae503071d3d55440a5e585a9960ee882326f819"} Feb 27 10:41:12 crc kubenswrapper[4728]: I0227 10:41:12.942371 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mkd8s"] Feb 27 10:41:12 crc kubenswrapper[4728]: I0227 10:41:12.950474 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkd8s" Feb 27 10:41:12 crc kubenswrapper[4728]: I0227 10:41:12.958988 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkd8s"] Feb 27 10:41:13 crc kubenswrapper[4728]: I0227 10:41:13.023674 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj4k5\" (UniqueName: \"kubernetes.io/projected/9abe7519-9667-4b1a-9671-f27446a2fc06-kube-api-access-vj4k5\") pod \"redhat-marketplace-mkd8s\" (UID: \"9abe7519-9667-4b1a-9671-f27446a2fc06\") " pod="openshift-marketplace/redhat-marketplace-mkd8s" Feb 27 10:41:13 crc kubenswrapper[4728]: I0227 10:41:13.023748 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9abe7519-9667-4b1a-9671-f27446a2fc06-utilities\") pod \"redhat-marketplace-mkd8s\" (UID: \"9abe7519-9667-4b1a-9671-f27446a2fc06\") " pod="openshift-marketplace/redhat-marketplace-mkd8s" Feb 27 10:41:13 crc kubenswrapper[4728]: I0227 10:41:13.023798 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9abe7519-9667-4b1a-9671-f27446a2fc06-catalog-content\") pod \"redhat-marketplace-mkd8s\" (UID: \"9abe7519-9667-4b1a-9671-f27446a2fc06\") " pod="openshift-marketplace/redhat-marketplace-mkd8s" Feb 27 10:41:13 crc kubenswrapper[4728]: I0227 10:41:13.125562 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj4k5\" (UniqueName: \"kubernetes.io/projected/9abe7519-9667-4b1a-9671-f27446a2fc06-kube-api-access-vj4k5\") pod \"redhat-marketplace-mkd8s\" (UID: \"9abe7519-9667-4b1a-9671-f27446a2fc06\") " pod="openshift-marketplace/redhat-marketplace-mkd8s" Feb 27 10:41:13 crc kubenswrapper[4728]: I0227 10:41:13.125641 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9abe7519-9667-4b1a-9671-f27446a2fc06-utilities\") pod \"redhat-marketplace-mkd8s\" (UID: \"9abe7519-9667-4b1a-9671-f27446a2fc06\") " pod="openshift-marketplace/redhat-marketplace-mkd8s" Feb 27 10:41:13 crc kubenswrapper[4728]: I0227 10:41:13.125678 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9abe7519-9667-4b1a-9671-f27446a2fc06-catalog-content\") pod \"redhat-marketplace-mkd8s\" (UID: \"9abe7519-9667-4b1a-9671-f27446a2fc06\") " pod="openshift-marketplace/redhat-marketplace-mkd8s" Feb 27 10:41:13 crc kubenswrapper[4728]: I0227 10:41:13.126401 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9abe7519-9667-4b1a-9671-f27446a2fc06-catalog-content\") pod \"redhat-marketplace-mkd8s\" (UID: \"9abe7519-9667-4b1a-9671-f27446a2fc06\") " pod="openshift-marketplace/redhat-marketplace-mkd8s" Feb 27 10:41:13 crc kubenswrapper[4728]: I0227 10:41:13.126611 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9abe7519-9667-4b1a-9671-f27446a2fc06-utilities\") pod \"redhat-marketplace-mkd8s\" (UID: \"9abe7519-9667-4b1a-9671-f27446a2fc06\") " pod="openshift-marketplace/redhat-marketplace-mkd8s" Feb 27 10:41:13 crc kubenswrapper[4728]: I0227 10:41:13.152121 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj4k5\" (UniqueName: \"kubernetes.io/projected/9abe7519-9667-4b1a-9671-f27446a2fc06-kube-api-access-vj4k5\") pod \"redhat-marketplace-mkd8s\" (UID: \"9abe7519-9667-4b1a-9671-f27446a2fc06\") " pod="openshift-marketplace/redhat-marketplace-mkd8s" Feb 27 10:41:13 crc kubenswrapper[4728]: I0227 10:41:13.278673 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkd8s" Feb 27 10:41:15 crc kubenswrapper[4728]: I0227 10:41:15.268346 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkd8s"] Feb 27 10:41:15 crc kubenswrapper[4728]: W0227 10:41:15.276918 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9abe7519_9667_4b1a_9671_f27446a2fc06.slice/crio-d5c65d82e15439561ba6a64e7108dd29dea4f6437e129cb823d6787af9be0917 WatchSource:0}: Error finding container d5c65d82e15439561ba6a64e7108dd29dea4f6437e129cb823d6787af9be0917: Status 404 returned error can't find the container with id d5c65d82e15439561ba6a64e7108dd29dea4f6437e129cb823d6787af9be0917 Feb 27 10:41:15 crc kubenswrapper[4728]: I0227 10:41:15.312004 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkd8s" event={"ID":"9abe7519-9667-4b1a-9671-f27446a2fc06","Type":"ContainerStarted","Data":"d5c65d82e15439561ba6a64e7108dd29dea4f6437e129cb823d6787af9be0917"} Feb 27 10:41:15 crc kubenswrapper[4728]: I0227 10:41:15.313802 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6p6pb" event={"ID":"1e33db72-7e87-4461-a619-842b33254576","Type":"ContainerStarted","Data":"7110fb0ac35f6e1db77d018d1775f0dcf71e4084988c91a186e1b0a19240d7d3"} Feb 27 10:41:15 crc kubenswrapper[4728]: I0227 10:41:15.315520 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-44qdc" event={"ID":"7cf5ad18-3d58-4d35-8255-c581c2d2b722","Type":"ContainerStarted","Data":"40ed053a299411b5fafa21a6f023b7f30d34519c4565858debe3aed45bf0e73b"} Feb 27 10:41:16 crc kubenswrapper[4728]: I0227 10:41:16.329051 4728 generic.go:334] "Generic (PLEG): container finished" podID="9abe7519-9667-4b1a-9671-f27446a2fc06" containerID="01c1799f1e24d9458dfc40a8f1d923d53ff7c012702b89e6b32626b4bc74cdee" exitCode=0 
Feb 27 10:41:16 crc kubenswrapper[4728]: I0227 10:41:16.329222 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkd8s" event={"ID":"9abe7519-9667-4b1a-9671-f27446a2fc06","Type":"ContainerDied","Data":"01c1799f1e24d9458dfc40a8f1d923d53ff7c012702b89e6b32626b4bc74cdee"} Feb 27 10:41:16 crc kubenswrapper[4728]: I0227 10:41:16.332937 4728 generic.go:334] "Generic (PLEG): container finished" podID="1e33db72-7e87-4461-a619-842b33254576" containerID="7110fb0ac35f6e1db77d018d1775f0dcf71e4084988c91a186e1b0a19240d7d3" exitCode=0 Feb 27 10:41:16 crc kubenswrapper[4728]: I0227 10:41:16.333070 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6p6pb" event={"ID":"1e33db72-7e87-4461-a619-842b33254576","Type":"ContainerDied","Data":"7110fb0ac35f6e1db77d018d1775f0dcf71e4084988c91a186e1b0a19240d7d3"} Feb 27 10:41:16 crc kubenswrapper[4728]: I0227 10:41:16.356419 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-44qdc" podStartSLOduration=2.661017259 podStartE2EDuration="10.356403131s" podCreationTimestamp="2026-02-27 10:41:06 +0000 UTC" firstStartedPulling="2026-02-27 10:41:07.132278268 +0000 UTC m=+887.094644404" lastFinishedPulling="2026-02-27 10:41:14.82766417 +0000 UTC m=+894.790030276" observedRunningTime="2026-02-27 10:41:15.349893932 +0000 UTC m=+895.312260038" watchObservedRunningTime="2026-02-27 10:41:16.356403131 +0000 UTC m=+896.318769247" Feb 27 10:41:17 crc kubenswrapper[4728]: I0227 10:41:17.342891 4728 generic.go:334] "Generic (PLEG): container finished" podID="9abe7519-9667-4b1a-9671-f27446a2fc06" containerID="29f80fffc9c765397d4a8310f8b2206e5f04b684ad66f8119a3c80ad1be299eb" exitCode=0 Feb 27 10:41:17 crc kubenswrapper[4728]: I0227 10:41:17.343133 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkd8s" 
event={"ID":"9abe7519-9667-4b1a-9671-f27446a2fc06","Type":"ContainerDied","Data":"29f80fffc9c765397d4a8310f8b2206e5f04b684ad66f8119a3c80ad1be299eb"} Feb 27 10:41:17 crc kubenswrapper[4728]: I0227 10:41:17.352855 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6p6pb" event={"ID":"1e33db72-7e87-4461-a619-842b33254576","Type":"ContainerStarted","Data":"756c4aa633e66c75620eae784081c96ed66be2b069f60de20c4ea802c2eda993"} Feb 27 10:41:17 crc kubenswrapper[4728]: I0227 10:41:17.384761 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6p6pb" podStartSLOduration=2.940480138 podStartE2EDuration="9.384746913s" podCreationTimestamp="2026-02-27 10:41:08 +0000 UTC" firstStartedPulling="2026-02-27 10:41:10.268992331 +0000 UTC m=+890.231358437" lastFinishedPulling="2026-02-27 10:41:16.713259096 +0000 UTC m=+896.675625212" observedRunningTime="2026-02-27 10:41:17.382745407 +0000 UTC m=+897.345111503" watchObservedRunningTime="2026-02-27 10:41:17.384746913 +0000 UTC m=+897.347113019" Feb 27 10:41:18 crc kubenswrapper[4728]: I0227 10:41:18.365913 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkd8s" event={"ID":"9abe7519-9667-4b1a-9671-f27446a2fc06","Type":"ContainerStarted","Data":"9baa98cae8c01c88f741da9194a3d7505f71995a6ea84e8b7b82840d1379854a"} Feb 27 10:41:18 crc kubenswrapper[4728]: I0227 10:41:18.387337 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mkd8s" podStartSLOduration=4.98721113 podStartE2EDuration="6.387307034s" podCreationTimestamp="2026-02-27 10:41:12 +0000 UTC" firstStartedPulling="2026-02-27 10:41:16.33196736 +0000 UTC m=+896.294333476" lastFinishedPulling="2026-02-27 10:41:17.732063284 +0000 UTC m=+897.694429380" observedRunningTime="2026-02-27 10:41:18.386953175 +0000 UTC m=+898.349319291" 
watchObservedRunningTime="2026-02-27 10:41:18.387307034 +0000 UTC m=+898.349673180" Feb 27 10:41:18 crc kubenswrapper[4728]: I0227 10:41:18.909558 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6p6pb" Feb 27 10:41:18 crc kubenswrapper[4728]: I0227 10:41:18.909633 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6p6pb" Feb 27 10:41:19 crc kubenswrapper[4728]: I0227 10:41:19.964587 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6p6pb" podUID="1e33db72-7e87-4461-a619-842b33254576" containerName="registry-server" probeResult="failure" output=< Feb 27 10:41:19 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 10:41:19 crc kubenswrapper[4728]: > Feb 27 10:41:23 crc kubenswrapper[4728]: I0227 10:41:23.278898 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mkd8s" Feb 27 10:41:23 crc kubenswrapper[4728]: I0227 10:41:23.279673 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mkd8s" Feb 27 10:41:23 crc kubenswrapper[4728]: I0227 10:41:23.343732 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mkd8s" Feb 27 10:41:23 crc kubenswrapper[4728]: I0227 10:41:23.348975 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vdfqk"] Feb 27 10:41:23 crc kubenswrapper[4728]: I0227 10:41:23.351602 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vdfqk" Feb 27 10:41:23 crc kubenswrapper[4728]: I0227 10:41:23.359702 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vdfqk"] Feb 27 10:41:23 crc kubenswrapper[4728]: I0227 10:41:23.394899 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jc46\" (UniqueName: \"kubernetes.io/projected/60c50733-8a96-4bce-bdbe-830e8043cbe2-kube-api-access-5jc46\") pod \"community-operators-vdfqk\" (UID: \"60c50733-8a96-4bce-bdbe-830e8043cbe2\") " pod="openshift-marketplace/community-operators-vdfqk" Feb 27 10:41:23 crc kubenswrapper[4728]: I0227 10:41:23.394942 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c50733-8a96-4bce-bdbe-830e8043cbe2-catalog-content\") pod \"community-operators-vdfqk\" (UID: \"60c50733-8a96-4bce-bdbe-830e8043cbe2\") " pod="openshift-marketplace/community-operators-vdfqk" Feb 27 10:41:23 crc kubenswrapper[4728]: I0227 10:41:23.395153 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c50733-8a96-4bce-bdbe-830e8043cbe2-utilities\") pod \"community-operators-vdfqk\" (UID: \"60c50733-8a96-4bce-bdbe-830e8043cbe2\") " pod="openshift-marketplace/community-operators-vdfqk" Feb 27 10:41:23 crc kubenswrapper[4728]: I0227 10:41:23.450259 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mkd8s" Feb 27 10:41:23 crc kubenswrapper[4728]: I0227 10:41:23.496703 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jc46\" (UniqueName: \"kubernetes.io/projected/60c50733-8a96-4bce-bdbe-830e8043cbe2-kube-api-access-5jc46\") pod \"community-operators-vdfqk\" 
(UID: \"60c50733-8a96-4bce-bdbe-830e8043cbe2\") " pod="openshift-marketplace/community-operators-vdfqk" Feb 27 10:41:23 crc kubenswrapper[4728]: I0227 10:41:23.497001 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c50733-8a96-4bce-bdbe-830e8043cbe2-catalog-content\") pod \"community-operators-vdfqk\" (UID: \"60c50733-8a96-4bce-bdbe-830e8043cbe2\") " pod="openshift-marketplace/community-operators-vdfqk" Feb 27 10:41:23 crc kubenswrapper[4728]: I0227 10:41:23.497214 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c50733-8a96-4bce-bdbe-830e8043cbe2-utilities\") pod \"community-operators-vdfqk\" (UID: \"60c50733-8a96-4bce-bdbe-830e8043cbe2\") " pod="openshift-marketplace/community-operators-vdfqk" Feb 27 10:41:23 crc kubenswrapper[4728]: I0227 10:41:23.497779 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c50733-8a96-4bce-bdbe-830e8043cbe2-utilities\") pod \"community-operators-vdfqk\" (UID: \"60c50733-8a96-4bce-bdbe-830e8043cbe2\") " pod="openshift-marketplace/community-operators-vdfqk" Feb 27 10:41:23 crc kubenswrapper[4728]: I0227 10:41:23.497889 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c50733-8a96-4bce-bdbe-830e8043cbe2-catalog-content\") pod \"community-operators-vdfqk\" (UID: \"60c50733-8a96-4bce-bdbe-830e8043cbe2\") " pod="openshift-marketplace/community-operators-vdfqk" Feb 27 10:41:23 crc kubenswrapper[4728]: I0227 10:41:23.525442 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jc46\" (UniqueName: \"kubernetes.io/projected/60c50733-8a96-4bce-bdbe-830e8043cbe2-kube-api-access-5jc46\") pod \"community-operators-vdfqk\" (UID: \"60c50733-8a96-4bce-bdbe-830e8043cbe2\") " 
pod="openshift-marketplace/community-operators-vdfqk" Feb 27 10:41:23 crc kubenswrapper[4728]: I0227 10:41:23.672887 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vdfqk" Feb 27 10:41:24 crc kubenswrapper[4728]: I0227 10:41:24.170609 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vdfqk"] Feb 27 10:41:24 crc kubenswrapper[4728]: W0227 10:41:24.183216 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60c50733_8a96_4bce_bdbe_830e8043cbe2.slice/crio-863f1fed529238d9c625ab9a3d08f8a45125f57580e569bce0044adbd1f1d74a WatchSource:0}: Error finding container 863f1fed529238d9c625ab9a3d08f8a45125f57580e569bce0044adbd1f1d74a: Status 404 returned error can't find the container with id 863f1fed529238d9c625ab9a3d08f8a45125f57580e569bce0044adbd1f1d74a Feb 27 10:41:24 crc kubenswrapper[4728]: I0227 10:41:24.414985 4728 generic.go:334] "Generic (PLEG): container finished" podID="60c50733-8a96-4bce-bdbe-830e8043cbe2" containerID="42491003b4dbcb9dd05eda7f8d70fe72177f337487a3dc2b85b837a0c15488a8" exitCode=0 Feb 27 10:41:24 crc kubenswrapper[4728]: I0227 10:41:24.416640 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vdfqk" event={"ID":"60c50733-8a96-4bce-bdbe-830e8043cbe2","Type":"ContainerDied","Data":"42491003b4dbcb9dd05eda7f8d70fe72177f337487a3dc2b85b837a0c15488a8"} Feb 27 10:41:24 crc kubenswrapper[4728]: I0227 10:41:24.416684 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vdfqk" event={"ID":"60c50733-8a96-4bce-bdbe-830e8043cbe2","Type":"ContainerStarted","Data":"863f1fed529238d9c625ab9a3d08f8a45125f57580e569bce0044adbd1f1d74a"} Feb 27 10:41:25 crc kubenswrapper[4728]: I0227 10:41:25.440957 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-vdfqk" event={"ID":"60c50733-8a96-4bce-bdbe-830e8043cbe2","Type":"ContainerStarted","Data":"ea3516e2fd2d55198c5cf8fd435bb49d07849fd4e2a7c101060668380a34afd6"} Feb 27 10:41:25 crc kubenswrapper[4728]: I0227 10:41:25.722267 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkd8s"] Feb 27 10:41:25 crc kubenswrapper[4728]: I0227 10:41:25.722634 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mkd8s" podUID="9abe7519-9667-4b1a-9671-f27446a2fc06" containerName="registry-server" containerID="cri-o://9baa98cae8c01c88f741da9194a3d7505f71995a6ea84e8b7b82840d1379854a" gracePeriod=2 Feb 27 10:41:26 crc kubenswrapper[4728]: I0227 10:41:26.458805 4728 generic.go:334] "Generic (PLEG): container finished" podID="60c50733-8a96-4bce-bdbe-830e8043cbe2" containerID="ea3516e2fd2d55198c5cf8fd435bb49d07849fd4e2a7c101060668380a34afd6" exitCode=0 Feb 27 10:41:26 crc kubenswrapper[4728]: I0227 10:41:26.458911 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vdfqk" event={"ID":"60c50733-8a96-4bce-bdbe-830e8043cbe2","Type":"ContainerDied","Data":"ea3516e2fd2d55198c5cf8fd435bb49d07849fd4e2a7c101060668380a34afd6"} Feb 27 10:41:26 crc kubenswrapper[4728]: I0227 10:41:26.463919 4728 generic.go:334] "Generic (PLEG): container finished" podID="9abe7519-9667-4b1a-9671-f27446a2fc06" containerID="9baa98cae8c01c88f741da9194a3d7505f71995a6ea84e8b7b82840d1379854a" exitCode=0 Feb 27 10:41:26 crc kubenswrapper[4728]: I0227 10:41:26.463977 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkd8s" event={"ID":"9abe7519-9667-4b1a-9671-f27446a2fc06","Type":"ContainerDied","Data":"9baa98cae8c01c88f741da9194a3d7505f71995a6ea84e8b7b82840d1379854a"} Feb 27 10:41:26 crc kubenswrapper[4728]: I0227 10:41:26.597792 4728 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkd8s" Feb 27 10:41:26 crc kubenswrapper[4728]: I0227 10:41:26.653961 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9abe7519-9667-4b1a-9671-f27446a2fc06-catalog-content\") pod \"9abe7519-9667-4b1a-9671-f27446a2fc06\" (UID: \"9abe7519-9667-4b1a-9671-f27446a2fc06\") " Feb 27 10:41:26 crc kubenswrapper[4728]: I0227 10:41:26.654129 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9abe7519-9667-4b1a-9671-f27446a2fc06-utilities\") pod \"9abe7519-9667-4b1a-9671-f27446a2fc06\" (UID: \"9abe7519-9667-4b1a-9671-f27446a2fc06\") " Feb 27 10:41:26 crc kubenswrapper[4728]: I0227 10:41:26.654299 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj4k5\" (UniqueName: \"kubernetes.io/projected/9abe7519-9667-4b1a-9671-f27446a2fc06-kube-api-access-vj4k5\") pod \"9abe7519-9667-4b1a-9671-f27446a2fc06\" (UID: \"9abe7519-9667-4b1a-9671-f27446a2fc06\") " Feb 27 10:41:26 crc kubenswrapper[4728]: I0227 10:41:26.655392 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9abe7519-9667-4b1a-9671-f27446a2fc06-utilities" (OuterVolumeSpecName: "utilities") pod "9abe7519-9667-4b1a-9671-f27446a2fc06" (UID: "9abe7519-9667-4b1a-9671-f27446a2fc06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:41:26 crc kubenswrapper[4728]: I0227 10:41:26.661922 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9abe7519-9667-4b1a-9671-f27446a2fc06-kube-api-access-vj4k5" (OuterVolumeSpecName: "kube-api-access-vj4k5") pod "9abe7519-9667-4b1a-9671-f27446a2fc06" (UID: "9abe7519-9667-4b1a-9671-f27446a2fc06"). 
InnerVolumeSpecName "kube-api-access-vj4k5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:41:26 crc kubenswrapper[4728]: I0227 10:41:26.694684 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9abe7519-9667-4b1a-9671-f27446a2fc06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9abe7519-9667-4b1a-9671-f27446a2fc06" (UID: "9abe7519-9667-4b1a-9671-f27446a2fc06"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:41:26 crc kubenswrapper[4728]: I0227 10:41:26.756412 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9abe7519-9667-4b1a-9671-f27446a2fc06-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:26 crc kubenswrapper[4728]: I0227 10:41:26.756555 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj4k5\" (UniqueName: \"kubernetes.io/projected/9abe7519-9667-4b1a-9671-f27446a2fc06-kube-api-access-vj4k5\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:26 crc kubenswrapper[4728]: I0227 10:41:26.756768 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9abe7519-9667-4b1a-9671-f27446a2fc06-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:27 crc kubenswrapper[4728]: I0227 10:41:27.472759 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkd8s" event={"ID":"9abe7519-9667-4b1a-9671-f27446a2fc06","Type":"ContainerDied","Data":"d5c65d82e15439561ba6a64e7108dd29dea4f6437e129cb823d6787af9be0917"} Feb 27 10:41:27 crc kubenswrapper[4728]: I0227 10:41:27.472778 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkd8s" Feb 27 10:41:27 crc kubenswrapper[4728]: I0227 10:41:27.473118 4728 scope.go:117] "RemoveContainer" containerID="9baa98cae8c01c88f741da9194a3d7505f71995a6ea84e8b7b82840d1379854a" Feb 27 10:41:27 crc kubenswrapper[4728]: I0227 10:41:27.476259 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vdfqk" event={"ID":"60c50733-8a96-4bce-bdbe-830e8043cbe2","Type":"ContainerStarted","Data":"74efbb907b2e1358ccf86e336e16dcf8f91a3bfc67cd4edcaeecd5d81e5313ca"} Feb 27 10:41:27 crc kubenswrapper[4728]: I0227 10:41:27.492376 4728 scope.go:117] "RemoveContainer" containerID="29f80fffc9c765397d4a8310f8b2206e5f04b684ad66f8119a3c80ad1be299eb" Feb 27 10:41:27 crc kubenswrapper[4728]: I0227 10:41:27.497394 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vdfqk" podStartSLOduration=2.082359256 podStartE2EDuration="4.497377003s" podCreationTimestamp="2026-02-27 10:41:23 +0000 UTC" firstStartedPulling="2026-02-27 10:41:24.41778087 +0000 UTC m=+904.380146986" lastFinishedPulling="2026-02-27 10:41:26.832798617 +0000 UTC m=+906.795164733" observedRunningTime="2026-02-27 10:41:27.494824613 +0000 UTC m=+907.457190729" watchObservedRunningTime="2026-02-27 10:41:27.497377003 +0000 UTC m=+907.459743109" Feb 27 10:41:27 crc kubenswrapper[4728]: I0227 10:41:27.515179 4728 scope.go:117] "RemoveContainer" containerID="01c1799f1e24d9458dfc40a8f1d923d53ff7c012702b89e6b32626b4bc74cdee" Feb 27 10:41:27 crc kubenswrapper[4728]: I0227 10:41:27.518728 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkd8s"] Feb 27 10:41:27 crc kubenswrapper[4728]: I0227 10:41:27.526468 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkd8s"] Feb 27 10:41:28 crc kubenswrapper[4728]: I0227 10:41:28.741977 4728 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9abe7519-9667-4b1a-9671-f27446a2fc06" path="/var/lib/kubelet/pods/9abe7519-9667-4b1a-9671-f27446a2fc06/volumes" Feb 27 10:41:28 crc kubenswrapper[4728]: I0227 10:41:28.988218 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6p6pb" Feb 27 10:41:29 crc kubenswrapper[4728]: I0227 10:41:29.050329 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6p6pb" Feb 27 10:41:31 crc kubenswrapper[4728]: I0227 10:41:31.530602 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6p6pb"] Feb 27 10:41:31 crc kubenswrapper[4728]: I0227 10:41:31.531737 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6p6pb" podUID="1e33db72-7e87-4461-a619-842b33254576" containerName="registry-server" containerID="cri-o://756c4aa633e66c75620eae784081c96ed66be2b069f60de20c4ea802c2eda993" gracePeriod=2 Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.432899 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6p6pb" Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.520386 4728 generic.go:334] "Generic (PLEG): container finished" podID="1e33db72-7e87-4461-a619-842b33254576" containerID="756c4aa633e66c75620eae784081c96ed66be2b069f60de20c4ea802c2eda993" exitCode=0 Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.520426 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6p6pb" event={"ID":"1e33db72-7e87-4461-a619-842b33254576","Type":"ContainerDied","Data":"756c4aa633e66c75620eae784081c96ed66be2b069f60de20c4ea802c2eda993"} Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.520460 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6p6pb" event={"ID":"1e33db72-7e87-4461-a619-842b33254576","Type":"ContainerDied","Data":"68044e71cf32f646ac9743bbaae503071d3d55440a5e585a9960ee882326f819"} Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.520460 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6p6pb" Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.520477 4728 scope.go:117] "RemoveContainer" containerID="756c4aa633e66c75620eae784081c96ed66be2b069f60de20c4ea802c2eda993" Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.539551 4728 scope.go:117] "RemoveContainer" containerID="7110fb0ac35f6e1db77d018d1775f0dcf71e4084988c91a186e1b0a19240d7d3" Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.555810 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e33db72-7e87-4461-a619-842b33254576-catalog-content\") pod \"1e33db72-7e87-4461-a619-842b33254576\" (UID: \"1e33db72-7e87-4461-a619-842b33254576\") " Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.555925 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6wpw\" (UniqueName: \"kubernetes.io/projected/1e33db72-7e87-4461-a619-842b33254576-kube-api-access-g6wpw\") pod \"1e33db72-7e87-4461-a619-842b33254576\" (UID: \"1e33db72-7e87-4461-a619-842b33254576\") " Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.555991 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e33db72-7e87-4461-a619-842b33254576-utilities\") pod \"1e33db72-7e87-4461-a619-842b33254576\" (UID: \"1e33db72-7e87-4461-a619-842b33254576\") " Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.557354 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e33db72-7e87-4461-a619-842b33254576-utilities" (OuterVolumeSpecName: "utilities") pod "1e33db72-7e87-4461-a619-842b33254576" (UID: "1e33db72-7e87-4461-a619-842b33254576"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.560736 4728 scope.go:117] "RemoveContainer" containerID="30add3ab78f3849e2da035d722cb1e6a9c9d69b136b11aab50af17d5da5016e9" Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.568751 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e33db72-7e87-4461-a619-842b33254576-kube-api-access-g6wpw" (OuterVolumeSpecName: "kube-api-access-g6wpw") pod "1e33db72-7e87-4461-a619-842b33254576" (UID: "1e33db72-7e87-4461-a619-842b33254576"). InnerVolumeSpecName "kube-api-access-g6wpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.628922 4728 scope.go:117] "RemoveContainer" containerID="756c4aa633e66c75620eae784081c96ed66be2b069f60de20c4ea802c2eda993" Feb 27 10:41:32 crc kubenswrapper[4728]: E0227 10:41:32.629851 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"756c4aa633e66c75620eae784081c96ed66be2b069f60de20c4ea802c2eda993\": container with ID starting with 756c4aa633e66c75620eae784081c96ed66be2b069f60de20c4ea802c2eda993 not found: ID does not exist" containerID="756c4aa633e66c75620eae784081c96ed66be2b069f60de20c4ea802c2eda993" Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.629907 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"756c4aa633e66c75620eae784081c96ed66be2b069f60de20c4ea802c2eda993"} err="failed to get container status \"756c4aa633e66c75620eae784081c96ed66be2b069f60de20c4ea802c2eda993\": rpc error: code = NotFound desc = could not find container \"756c4aa633e66c75620eae784081c96ed66be2b069f60de20c4ea802c2eda993\": container with ID starting with 756c4aa633e66c75620eae784081c96ed66be2b069f60de20c4ea802c2eda993 not found: ID does not exist" Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.629932 
4728 scope.go:117] "RemoveContainer" containerID="7110fb0ac35f6e1db77d018d1775f0dcf71e4084988c91a186e1b0a19240d7d3" Feb 27 10:41:32 crc kubenswrapper[4728]: E0227 10:41:32.630984 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7110fb0ac35f6e1db77d018d1775f0dcf71e4084988c91a186e1b0a19240d7d3\": container with ID starting with 7110fb0ac35f6e1db77d018d1775f0dcf71e4084988c91a186e1b0a19240d7d3 not found: ID does not exist" containerID="7110fb0ac35f6e1db77d018d1775f0dcf71e4084988c91a186e1b0a19240d7d3" Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.631036 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7110fb0ac35f6e1db77d018d1775f0dcf71e4084988c91a186e1b0a19240d7d3"} err="failed to get container status \"7110fb0ac35f6e1db77d018d1775f0dcf71e4084988c91a186e1b0a19240d7d3\": rpc error: code = NotFound desc = could not find container \"7110fb0ac35f6e1db77d018d1775f0dcf71e4084988c91a186e1b0a19240d7d3\": container with ID starting with 7110fb0ac35f6e1db77d018d1775f0dcf71e4084988c91a186e1b0a19240d7d3 not found: ID does not exist" Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.631052 4728 scope.go:117] "RemoveContainer" containerID="30add3ab78f3849e2da035d722cb1e6a9c9d69b136b11aab50af17d5da5016e9" Feb 27 10:41:32 crc kubenswrapper[4728]: E0227 10:41:32.631414 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30add3ab78f3849e2da035d722cb1e6a9c9d69b136b11aab50af17d5da5016e9\": container with ID starting with 30add3ab78f3849e2da035d722cb1e6a9c9d69b136b11aab50af17d5da5016e9 not found: ID does not exist" containerID="30add3ab78f3849e2da035d722cb1e6a9c9d69b136b11aab50af17d5da5016e9" Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.631594 4728 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"30add3ab78f3849e2da035d722cb1e6a9c9d69b136b11aab50af17d5da5016e9"} err="failed to get container status \"30add3ab78f3849e2da035d722cb1e6a9c9d69b136b11aab50af17d5da5016e9\": rpc error: code = NotFound desc = could not find container \"30add3ab78f3849e2da035d722cb1e6a9c9d69b136b11aab50af17d5da5016e9\": container with ID starting with 30add3ab78f3849e2da035d722cb1e6a9c9d69b136b11aab50af17d5da5016e9 not found: ID does not exist" Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.657672 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6wpw\" (UniqueName: \"kubernetes.io/projected/1e33db72-7e87-4461-a619-842b33254576-kube-api-access-g6wpw\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.657696 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e33db72-7e87-4461-a619-842b33254576-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.676670 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e33db72-7e87-4461-a619-842b33254576-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e33db72-7e87-4461-a619-842b33254576" (UID: "1e33db72-7e87-4461-a619-842b33254576"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.759231 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e33db72-7e87-4461-a619-842b33254576-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.877227 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6p6pb"] Feb 27 10:41:32 crc kubenswrapper[4728]: I0227 10:41:32.887089 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6p6pb"] Feb 27 10:41:33 crc kubenswrapper[4728]: I0227 10:41:33.673325 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vdfqk" Feb 27 10:41:33 crc kubenswrapper[4728]: I0227 10:41:33.673384 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vdfqk" Feb 27 10:41:33 crc kubenswrapper[4728]: I0227 10:41:33.728422 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vdfqk" Feb 27 10:41:34 crc kubenswrapper[4728]: I0227 10:41:34.616387 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vdfqk" Feb 27 10:41:34 crc kubenswrapper[4728]: I0227 10:41:34.737115 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e33db72-7e87-4461-a619-842b33254576" path="/var/lib/kubelet/pods/1e33db72-7e87-4461-a619-842b33254576/volumes" Feb 27 10:41:35 crc kubenswrapper[4728]: I0227 10:41:35.719480 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vdfqk"] Feb 27 10:41:35 crc kubenswrapper[4728]: I0227 10:41:35.927467 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:41:35 crc kubenswrapper[4728]: I0227 10:41:35.927562 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:41:36 crc kubenswrapper[4728]: I0227 10:41:36.558845 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vdfqk" podUID="60c50733-8a96-4bce-bdbe-830e8043cbe2" containerName="registry-server" containerID="cri-o://74efbb907b2e1358ccf86e336e16dcf8f91a3bfc67cd4edcaeecd5d81e5313ca" gracePeriod=2 Feb 27 10:41:36 crc kubenswrapper[4728]: I0227 10:41:36.956070 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vdfqk" Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.030937 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c50733-8a96-4bce-bdbe-830e8043cbe2-catalog-content\") pod \"60c50733-8a96-4bce-bdbe-830e8043cbe2\" (UID: \"60c50733-8a96-4bce-bdbe-830e8043cbe2\") " Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.031005 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c50733-8a96-4bce-bdbe-830e8043cbe2-utilities\") pod \"60c50733-8a96-4bce-bdbe-830e8043cbe2\" (UID: \"60c50733-8a96-4bce-bdbe-830e8043cbe2\") " Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.031109 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jc46\" (UniqueName: \"kubernetes.io/projected/60c50733-8a96-4bce-bdbe-830e8043cbe2-kube-api-access-5jc46\") pod \"60c50733-8a96-4bce-bdbe-830e8043cbe2\" (UID: \"60c50733-8a96-4bce-bdbe-830e8043cbe2\") " Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.032881 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60c50733-8a96-4bce-bdbe-830e8043cbe2-utilities" (OuterVolumeSpecName: "utilities") pod "60c50733-8a96-4bce-bdbe-830e8043cbe2" (UID: "60c50733-8a96-4bce-bdbe-830e8043cbe2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.035959 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60c50733-8a96-4bce-bdbe-830e8043cbe2-kube-api-access-5jc46" (OuterVolumeSpecName: "kube-api-access-5jc46") pod "60c50733-8a96-4bce-bdbe-830e8043cbe2" (UID: "60c50733-8a96-4bce-bdbe-830e8043cbe2"). InnerVolumeSpecName "kube-api-access-5jc46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.081372 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60c50733-8a96-4bce-bdbe-830e8043cbe2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60c50733-8a96-4bce-bdbe-830e8043cbe2" (UID: "60c50733-8a96-4bce-bdbe-830e8043cbe2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.132692 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c50733-8a96-4bce-bdbe-830e8043cbe2-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.132724 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jc46\" (UniqueName: \"kubernetes.io/projected/60c50733-8a96-4bce-bdbe-830e8043cbe2-kube-api-access-5jc46\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.132737 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c50733-8a96-4bce-bdbe-830e8043cbe2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.568948 4728 generic.go:334] "Generic (PLEG): container finished" podID="60c50733-8a96-4bce-bdbe-830e8043cbe2" containerID="74efbb907b2e1358ccf86e336e16dcf8f91a3bfc67cd4edcaeecd5d81e5313ca" exitCode=0 Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.569042 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vdfqk" event={"ID":"60c50733-8a96-4bce-bdbe-830e8043cbe2","Type":"ContainerDied","Data":"74efbb907b2e1358ccf86e336e16dcf8f91a3bfc67cd4edcaeecd5d81e5313ca"} Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.569076 4728 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-vdfqk" Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.569095 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vdfqk" event={"ID":"60c50733-8a96-4bce-bdbe-830e8043cbe2","Type":"ContainerDied","Data":"863f1fed529238d9c625ab9a3d08f8a45125f57580e569bce0044adbd1f1d74a"} Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.569119 4728 scope.go:117] "RemoveContainer" containerID="74efbb907b2e1358ccf86e336e16dcf8f91a3bfc67cd4edcaeecd5d81e5313ca" Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.602663 4728 scope.go:117] "RemoveContainer" containerID="ea3516e2fd2d55198c5cf8fd435bb49d07849fd4e2a7c101060668380a34afd6" Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.612568 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vdfqk"] Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.623094 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vdfqk"] Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.629424 4728 scope.go:117] "RemoveContainer" containerID="42491003b4dbcb9dd05eda7f8d70fe72177f337487a3dc2b85b837a0c15488a8" Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.698762 4728 scope.go:117] "RemoveContainer" containerID="74efbb907b2e1358ccf86e336e16dcf8f91a3bfc67cd4edcaeecd5d81e5313ca" Feb 27 10:41:37 crc kubenswrapper[4728]: E0227 10:41:37.699282 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74efbb907b2e1358ccf86e336e16dcf8f91a3bfc67cd4edcaeecd5d81e5313ca\": container with ID starting with 74efbb907b2e1358ccf86e336e16dcf8f91a3bfc67cd4edcaeecd5d81e5313ca not found: ID does not exist" containerID="74efbb907b2e1358ccf86e336e16dcf8f91a3bfc67cd4edcaeecd5d81e5313ca" Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.699411 
4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74efbb907b2e1358ccf86e336e16dcf8f91a3bfc67cd4edcaeecd5d81e5313ca"} err="failed to get container status \"74efbb907b2e1358ccf86e336e16dcf8f91a3bfc67cd4edcaeecd5d81e5313ca\": rpc error: code = NotFound desc = could not find container \"74efbb907b2e1358ccf86e336e16dcf8f91a3bfc67cd4edcaeecd5d81e5313ca\": container with ID starting with 74efbb907b2e1358ccf86e336e16dcf8f91a3bfc67cd4edcaeecd5d81e5313ca not found: ID does not exist" Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.699447 4728 scope.go:117] "RemoveContainer" containerID="ea3516e2fd2d55198c5cf8fd435bb49d07849fd4e2a7c101060668380a34afd6" Feb 27 10:41:37 crc kubenswrapper[4728]: E0227 10:41:37.700907 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea3516e2fd2d55198c5cf8fd435bb49d07849fd4e2a7c101060668380a34afd6\": container with ID starting with ea3516e2fd2d55198c5cf8fd435bb49d07849fd4e2a7c101060668380a34afd6 not found: ID does not exist" containerID="ea3516e2fd2d55198c5cf8fd435bb49d07849fd4e2a7c101060668380a34afd6" Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.700970 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea3516e2fd2d55198c5cf8fd435bb49d07849fd4e2a7c101060668380a34afd6"} err="failed to get container status \"ea3516e2fd2d55198c5cf8fd435bb49d07849fd4e2a7c101060668380a34afd6\": rpc error: code = NotFound desc = could not find container \"ea3516e2fd2d55198c5cf8fd435bb49d07849fd4e2a7c101060668380a34afd6\": container with ID starting with ea3516e2fd2d55198c5cf8fd435bb49d07849fd4e2a7c101060668380a34afd6 not found: ID does not exist" Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.701005 4728 scope.go:117] "RemoveContainer" containerID="42491003b4dbcb9dd05eda7f8d70fe72177f337487a3dc2b85b837a0c15488a8" Feb 27 10:41:37 crc kubenswrapper[4728]: E0227 
10:41:37.701413 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42491003b4dbcb9dd05eda7f8d70fe72177f337487a3dc2b85b837a0c15488a8\": container with ID starting with 42491003b4dbcb9dd05eda7f8d70fe72177f337487a3dc2b85b837a0c15488a8 not found: ID does not exist" containerID="42491003b4dbcb9dd05eda7f8d70fe72177f337487a3dc2b85b837a0c15488a8" Feb 27 10:41:37 crc kubenswrapper[4728]: I0227 10:41:37.701439 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42491003b4dbcb9dd05eda7f8d70fe72177f337487a3dc2b85b837a0c15488a8"} err="failed to get container status \"42491003b4dbcb9dd05eda7f8d70fe72177f337487a3dc2b85b837a0c15488a8\": rpc error: code = NotFound desc = could not find container \"42491003b4dbcb9dd05eda7f8d70fe72177f337487a3dc2b85b837a0c15488a8\": container with ID starting with 42491003b4dbcb9dd05eda7f8d70fe72177f337487a3dc2b85b837a0c15488a8 not found: ID does not exist" Feb 27 10:41:38 crc kubenswrapper[4728]: I0227 10:41:38.733150 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60c50733-8a96-4bce-bdbe-830e8043cbe2" path="/var/lib/kubelet/pods/60c50733-8a96-4bce-bdbe-830e8043cbe2/volumes" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.132080 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rlqxl"] Feb 27 10:41:41 crc kubenswrapper[4728]: E0227 10:41:41.133279 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c50733-8a96-4bce-bdbe-830e8043cbe2" containerName="registry-server" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.133302 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c50733-8a96-4bce-bdbe-830e8043cbe2" containerName="registry-server" Feb 27 10:41:41 crc kubenswrapper[4728]: E0227 10:41:41.133321 4728 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1e33db72-7e87-4461-a619-842b33254576" containerName="extract-utilities" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.133335 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e33db72-7e87-4461-a619-842b33254576" containerName="extract-utilities" Feb 27 10:41:41 crc kubenswrapper[4728]: E0227 10:41:41.133355 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e33db72-7e87-4461-a619-842b33254576" containerName="extract-content" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.133367 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e33db72-7e87-4461-a619-842b33254576" containerName="extract-content" Feb 27 10:41:41 crc kubenswrapper[4728]: E0227 10:41:41.133388 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e33db72-7e87-4461-a619-842b33254576" containerName="registry-server" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.133398 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e33db72-7e87-4461-a619-842b33254576" containerName="registry-server" Feb 27 10:41:41 crc kubenswrapper[4728]: E0227 10:41:41.133421 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9abe7519-9667-4b1a-9671-f27446a2fc06" containerName="extract-content" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.133431 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9abe7519-9667-4b1a-9671-f27446a2fc06" containerName="extract-content" Feb 27 10:41:41 crc kubenswrapper[4728]: E0227 10:41:41.133453 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c50733-8a96-4bce-bdbe-830e8043cbe2" containerName="extract-content" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.133464 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c50733-8a96-4bce-bdbe-830e8043cbe2" containerName="extract-content" Feb 27 10:41:41 crc kubenswrapper[4728]: E0227 10:41:41.133475 4728 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9abe7519-9667-4b1a-9671-f27446a2fc06" containerName="extract-utilities" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.133486 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9abe7519-9667-4b1a-9671-f27446a2fc06" containerName="extract-utilities" Feb 27 10:41:41 crc kubenswrapper[4728]: E0227 10:41:41.133535 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c50733-8a96-4bce-bdbe-830e8043cbe2" containerName="extract-utilities" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.133548 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c50733-8a96-4bce-bdbe-830e8043cbe2" containerName="extract-utilities" Feb 27 10:41:41 crc kubenswrapper[4728]: E0227 10:41:41.133566 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9abe7519-9667-4b1a-9671-f27446a2fc06" containerName="registry-server" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.133576 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9abe7519-9667-4b1a-9671-f27446a2fc06" containerName="registry-server" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.133772 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="60c50733-8a96-4bce-bdbe-830e8043cbe2" containerName="registry-server" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.133794 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="9abe7519-9667-4b1a-9671-f27446a2fc06" containerName="registry-server" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.133828 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e33db72-7e87-4461-a619-842b33254576" containerName="registry-server" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.135399 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rlqxl" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.149865 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rlqxl"] Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.195032 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28mrw\" (UniqueName: \"kubernetes.io/projected/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66-kube-api-access-28mrw\") pod \"certified-operators-rlqxl\" (UID: \"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66\") " pod="openshift-marketplace/certified-operators-rlqxl" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.195290 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66-catalog-content\") pod \"certified-operators-rlqxl\" (UID: \"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66\") " pod="openshift-marketplace/certified-operators-rlqxl" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.195393 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66-utilities\") pod \"certified-operators-rlqxl\" (UID: \"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66\") " pod="openshift-marketplace/certified-operators-rlqxl" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.296848 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28mrw\" (UniqueName: \"kubernetes.io/projected/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66-kube-api-access-28mrw\") pod \"certified-operators-rlqxl\" (UID: \"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66\") " pod="openshift-marketplace/certified-operators-rlqxl" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.296999 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66-catalog-content\") pod \"certified-operators-rlqxl\" (UID: \"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66\") " pod="openshift-marketplace/certified-operators-rlqxl" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.297055 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66-utilities\") pod \"certified-operators-rlqxl\" (UID: \"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66\") " pod="openshift-marketplace/certified-operators-rlqxl" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.297602 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66-catalog-content\") pod \"certified-operators-rlqxl\" (UID: \"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66\") " pod="openshift-marketplace/certified-operators-rlqxl" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.297652 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66-utilities\") pod \"certified-operators-rlqxl\" (UID: \"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66\") " pod="openshift-marketplace/certified-operators-rlqxl" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.323929 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28mrw\" (UniqueName: \"kubernetes.io/projected/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66-kube-api-access-28mrw\") pod \"certified-operators-rlqxl\" (UID: \"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66\") " pod="openshift-marketplace/certified-operators-rlqxl" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.465082 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rlqxl" Feb 27 10:41:41 crc kubenswrapper[4728]: I0227 10:41:41.759292 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rlqxl"] Feb 27 10:41:42 crc kubenswrapper[4728]: I0227 10:41:42.608809 4728 generic.go:334] "Generic (PLEG): container finished" podID="e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66" containerID="1d1447a9ae59bdde465f72680a1d31096570ca9e077b83a9789c82c433998de8" exitCode=0 Feb 27 10:41:42 crc kubenswrapper[4728]: I0227 10:41:42.609091 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlqxl" event={"ID":"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66","Type":"ContainerDied","Data":"1d1447a9ae59bdde465f72680a1d31096570ca9e077b83a9789c82c433998de8"} Feb 27 10:41:42 crc kubenswrapper[4728]: I0227 10:41:42.609130 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlqxl" event={"ID":"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66","Type":"ContainerStarted","Data":"7d459165a57de16fdf48a8326fd9ed4c1bd5ec4756995d2510daa1593ec4e828"} Feb 27 10:41:43 crc kubenswrapper[4728]: I0227 10:41:43.619055 4728 generic.go:334] "Generic (PLEG): container finished" podID="e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66" containerID="4f92072f7401988fb80a9c97e44c94a727e36c93f9d20217a0973bad9e18b274" exitCode=0 Feb 27 10:41:43 crc kubenswrapper[4728]: I0227 10:41:43.619171 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlqxl" event={"ID":"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66","Type":"ContainerDied","Data":"4f92072f7401988fb80a9c97e44c94a727e36c93f9d20217a0973bad9e18b274"} Feb 27 10:41:44 crc kubenswrapper[4728]: I0227 10:41:44.630229 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlqxl" 
event={"ID":"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66","Type":"ContainerStarted","Data":"ba6a295f44100be1b142ca9e85f661d5ae7c3d9f30d9f3f153642d7285e7b0bb"} Feb 27 10:41:44 crc kubenswrapper[4728]: I0227 10:41:44.668268 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rlqxl" podStartSLOduration=2.21287823 podStartE2EDuration="3.668246234s" podCreationTimestamp="2026-02-27 10:41:41 +0000 UTC" firstStartedPulling="2026-02-27 10:41:42.612755256 +0000 UTC m=+922.575121392" lastFinishedPulling="2026-02-27 10:41:44.06812329 +0000 UTC m=+924.030489396" observedRunningTime="2026-02-27 10:41:44.66334709 +0000 UTC m=+924.625713196" watchObservedRunningTime="2026-02-27 10:41:44.668246234 +0000 UTC m=+924.630612340" Feb 27 10:41:45 crc kubenswrapper[4728]: I0227 10:41:45.571560 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77"] Feb 27 10:41:45 crc kubenswrapper[4728]: I0227 10:41:45.573161 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77" Feb 27 10:41:45 crc kubenswrapper[4728]: I0227 10:41:45.575037 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 27 10:41:45 crc kubenswrapper[4728]: I0227 10:41:45.586802 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77"] Feb 27 10:41:45 crc kubenswrapper[4728]: I0227 10:41:45.770620 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdfxv\" (UniqueName: \"kubernetes.io/projected/98dca69a-84ef-4bb6-b656-c345ecc13939-kube-api-access-qdfxv\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77\" (UID: \"98dca69a-84ef-4bb6-b656-c345ecc13939\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77" Feb 27 10:41:45 crc kubenswrapper[4728]: I0227 10:41:45.770759 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/98dca69a-84ef-4bb6-b656-c345ecc13939-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77\" (UID: \"98dca69a-84ef-4bb6-b656-c345ecc13939\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77" Feb 27 10:41:45 crc kubenswrapper[4728]: I0227 10:41:45.771138 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/98dca69a-84ef-4bb6-b656-c345ecc13939-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77\" (UID: \"98dca69a-84ef-4bb6-b656-c345ecc13939\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77" Feb 27 10:41:45 crc kubenswrapper[4728]: 
I0227 10:41:45.872924 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/98dca69a-84ef-4bb6-b656-c345ecc13939-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77\" (UID: \"98dca69a-84ef-4bb6-b656-c345ecc13939\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77" Feb 27 10:41:45 crc kubenswrapper[4728]: I0227 10:41:45.873023 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdfxv\" (UniqueName: \"kubernetes.io/projected/98dca69a-84ef-4bb6-b656-c345ecc13939-kube-api-access-qdfxv\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77\" (UID: \"98dca69a-84ef-4bb6-b656-c345ecc13939\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77" Feb 27 10:41:45 crc kubenswrapper[4728]: I0227 10:41:45.873158 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/98dca69a-84ef-4bb6-b656-c345ecc13939-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77\" (UID: \"98dca69a-84ef-4bb6-b656-c345ecc13939\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77" Feb 27 10:41:45 crc kubenswrapper[4728]: I0227 10:41:45.873545 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/98dca69a-84ef-4bb6-b656-c345ecc13939-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77\" (UID: \"98dca69a-84ef-4bb6-b656-c345ecc13939\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77" Feb 27 10:41:45 crc kubenswrapper[4728]: I0227 10:41:45.873995 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/98dca69a-84ef-4bb6-b656-c345ecc13939-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77\" (UID: \"98dca69a-84ef-4bb6-b656-c345ecc13939\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77" Feb 27 10:41:45 crc kubenswrapper[4728]: I0227 10:41:45.907256 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdfxv\" (UniqueName: \"kubernetes.io/projected/98dca69a-84ef-4bb6-b656-c345ecc13939-kube-api-access-qdfxv\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77\" (UID: \"98dca69a-84ef-4bb6-b656-c345ecc13939\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77" Feb 27 10:41:46 crc kubenswrapper[4728]: I0227 10:41:46.187036 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77" Feb 27 10:41:46 crc kubenswrapper[4728]: I0227 10:41:46.385236 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77"] Feb 27 10:41:46 crc kubenswrapper[4728]: I0227 10:41:46.644282 4728 generic.go:334] "Generic (PLEG): container finished" podID="98dca69a-84ef-4bb6-b656-c345ecc13939" containerID="7dcfe3b26c008eadd5680d5e7c459073e9d0bd70870e462af54d571254334d37" exitCode=0 Feb 27 10:41:46 crc kubenswrapper[4728]: I0227 10:41:46.644382 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77" event={"ID":"98dca69a-84ef-4bb6-b656-c345ecc13939","Type":"ContainerDied","Data":"7dcfe3b26c008eadd5680d5e7c459073e9d0bd70870e462af54d571254334d37"} Feb 27 10:41:46 crc kubenswrapper[4728]: I0227 10:41:46.644753 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77" event={"ID":"98dca69a-84ef-4bb6-b656-c345ecc13939","Type":"ContainerStarted","Data":"d5961d2958a2bc5409e275e79f5c891820d2c42d116181b8191a810ab84c2999"} Feb 27 10:41:48 crc kubenswrapper[4728]: I0227 10:41:48.660777 4728 generic.go:334] "Generic (PLEG): container finished" podID="98dca69a-84ef-4bb6-b656-c345ecc13939" containerID="5074a30b015cc6c4a6b3ec6a16583df219add8546201b6a6b40d96ba8d0097e2" exitCode=0 Feb 27 10:41:48 crc kubenswrapper[4728]: I0227 10:41:48.660977 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77" event={"ID":"98dca69a-84ef-4bb6-b656-c345ecc13939","Type":"ContainerDied","Data":"5074a30b015cc6c4a6b3ec6a16583df219add8546201b6a6b40d96ba8d0097e2"} Feb 27 10:41:49 crc kubenswrapper[4728]: I0227 10:41:49.670842 4728 generic.go:334] "Generic (PLEG): container finished" podID="98dca69a-84ef-4bb6-b656-c345ecc13939" containerID="4f7e70ddb7f8a3ecfc9344e818cc3e970a1a9dec6d80f5f24d63d1967768ac30" exitCode=0 Feb 27 10:41:49 crc kubenswrapper[4728]: I0227 10:41:49.670956 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77" event={"ID":"98dca69a-84ef-4bb6-b656-c345ecc13939","Type":"ContainerDied","Data":"4f7e70ddb7f8a3ecfc9344e818cc3e970a1a9dec6d80f5f24d63d1967768ac30"} Feb 27 10:41:51 crc kubenswrapper[4728]: I0227 10:41:51.060590 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77" Feb 27 10:41:51 crc kubenswrapper[4728]: I0227 10:41:51.159028 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdfxv\" (UniqueName: \"kubernetes.io/projected/98dca69a-84ef-4bb6-b656-c345ecc13939-kube-api-access-qdfxv\") pod \"98dca69a-84ef-4bb6-b656-c345ecc13939\" (UID: \"98dca69a-84ef-4bb6-b656-c345ecc13939\") " Feb 27 10:41:51 crc kubenswrapper[4728]: I0227 10:41:51.159175 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/98dca69a-84ef-4bb6-b656-c345ecc13939-util\") pod \"98dca69a-84ef-4bb6-b656-c345ecc13939\" (UID: \"98dca69a-84ef-4bb6-b656-c345ecc13939\") " Feb 27 10:41:51 crc kubenswrapper[4728]: I0227 10:41:51.159197 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/98dca69a-84ef-4bb6-b656-c345ecc13939-bundle\") pod \"98dca69a-84ef-4bb6-b656-c345ecc13939\" (UID: \"98dca69a-84ef-4bb6-b656-c345ecc13939\") " Feb 27 10:41:51 crc kubenswrapper[4728]: I0227 10:41:51.160230 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98dca69a-84ef-4bb6-b656-c345ecc13939-bundle" (OuterVolumeSpecName: "bundle") pod "98dca69a-84ef-4bb6-b656-c345ecc13939" (UID: "98dca69a-84ef-4bb6-b656-c345ecc13939"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:41:51 crc kubenswrapper[4728]: I0227 10:41:51.166193 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98dca69a-84ef-4bb6-b656-c345ecc13939-kube-api-access-qdfxv" (OuterVolumeSpecName: "kube-api-access-qdfxv") pod "98dca69a-84ef-4bb6-b656-c345ecc13939" (UID: "98dca69a-84ef-4bb6-b656-c345ecc13939"). InnerVolumeSpecName "kube-api-access-qdfxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:41:51 crc kubenswrapper[4728]: I0227 10:41:51.174040 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98dca69a-84ef-4bb6-b656-c345ecc13939-util" (OuterVolumeSpecName: "util") pod "98dca69a-84ef-4bb6-b656-c345ecc13939" (UID: "98dca69a-84ef-4bb6-b656-c345ecc13939"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:41:51 crc kubenswrapper[4728]: I0227 10:41:51.261319 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdfxv\" (UniqueName: \"kubernetes.io/projected/98dca69a-84ef-4bb6-b656-c345ecc13939-kube-api-access-qdfxv\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:51 crc kubenswrapper[4728]: I0227 10:41:51.261351 4728 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/98dca69a-84ef-4bb6-b656-c345ecc13939-util\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:51 crc kubenswrapper[4728]: I0227 10:41:51.261362 4728 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/98dca69a-84ef-4bb6-b656-c345ecc13939-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:51 crc kubenswrapper[4728]: I0227 10:41:51.465316 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rlqxl" Feb 27 10:41:51 crc kubenswrapper[4728]: I0227 10:41:51.465421 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rlqxl" Feb 27 10:41:51 crc kubenswrapper[4728]: I0227 10:41:51.531931 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rlqxl" Feb 27 10:41:51 crc kubenswrapper[4728]: I0227 10:41:51.688072 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77" event={"ID":"98dca69a-84ef-4bb6-b656-c345ecc13939","Type":"ContainerDied","Data":"d5961d2958a2bc5409e275e79f5c891820d2c42d116181b8191a810ab84c2999"} Feb 27 10:41:51 crc kubenswrapper[4728]: I0227 10:41:51.688125 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5961d2958a2bc5409e275e79f5c891820d2c42d116181b8191a810ab84c2999" Feb 27 10:41:51 crc kubenswrapper[4728]: I0227 10:41:51.688132 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77" Feb 27 10:41:51 crc kubenswrapper[4728]: I0227 10:41:51.759391 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rlqxl" Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.124375 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rlqxl"] Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.124697 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rlqxl" podUID="e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66" containerName="registry-server" containerID="cri-o://ba6a295f44100be1b142ca9e85f661d5ae7c3d9f30d9f3f153642d7285e7b0bb" gracePeriod=2 Feb 27 10:41:54 crc kubenswrapper[4728]: E0227 10:41:54.318399 4728 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9ffdfb6_f736_4aab_9a32_c2ac3be5bf66.slice/crio-ba6a295f44100be1b142ca9e85f661d5ae7c3d9f30d9f3f153642d7285e7b0bb.scope\": RecentStats: unable to find data in memory cache]" Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.636034 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rlqxl" Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.659496 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28mrw\" (UniqueName: \"kubernetes.io/projected/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66-kube-api-access-28mrw\") pod \"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66\" (UID: \"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66\") " Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.659722 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66-catalog-content\") pod \"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66\" (UID: \"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66\") " Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.659903 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66-utilities\") pod \"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66\" (UID: \"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66\") " Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.661864 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66-utilities" (OuterVolumeSpecName: "utilities") pod "e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66" (UID: "e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.670938 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66-kube-api-access-28mrw" (OuterVolumeSpecName: "kube-api-access-28mrw") pod "e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66" (UID: "e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66"). InnerVolumeSpecName "kube-api-access-28mrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.731377 4728 generic.go:334] "Generic (PLEG): container finished" podID="e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66" containerID="ba6a295f44100be1b142ca9e85f661d5ae7c3d9f30d9f3f153642d7285e7b0bb" exitCode=0 Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.731468 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rlqxl" Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.741447 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlqxl" event={"ID":"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66","Type":"ContainerDied","Data":"ba6a295f44100be1b142ca9e85f661d5ae7c3d9f30d9f3f153642d7285e7b0bb"} Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.741496 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlqxl" event={"ID":"e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66","Type":"ContainerDied","Data":"7d459165a57de16fdf48a8326fd9ed4c1bd5ec4756995d2510daa1593ec4e828"} Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.741535 4728 scope.go:117] "RemoveContainer" containerID="ba6a295f44100be1b142ca9e85f661d5ae7c3d9f30d9f3f153642d7285e7b0bb" Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.762766 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28mrw\" (UniqueName: \"kubernetes.io/projected/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66-kube-api-access-28mrw\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.762975 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.784339 4728 scope.go:117] "RemoveContainer" 
containerID="4f92072f7401988fb80a9c97e44c94a727e36c93f9d20217a0973bad9e18b274" Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.826992 4728 scope.go:117] "RemoveContainer" containerID="1d1447a9ae59bdde465f72680a1d31096570ca9e077b83a9789c82c433998de8" Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.850223 4728 scope.go:117] "RemoveContainer" containerID="ba6a295f44100be1b142ca9e85f661d5ae7c3d9f30d9f3f153642d7285e7b0bb" Feb 27 10:41:54 crc kubenswrapper[4728]: E0227 10:41:54.850628 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba6a295f44100be1b142ca9e85f661d5ae7c3d9f30d9f3f153642d7285e7b0bb\": container with ID starting with ba6a295f44100be1b142ca9e85f661d5ae7c3d9f30d9f3f153642d7285e7b0bb not found: ID does not exist" containerID="ba6a295f44100be1b142ca9e85f661d5ae7c3d9f30d9f3f153642d7285e7b0bb" Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.850659 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba6a295f44100be1b142ca9e85f661d5ae7c3d9f30d9f3f153642d7285e7b0bb"} err="failed to get container status \"ba6a295f44100be1b142ca9e85f661d5ae7c3d9f30d9f3f153642d7285e7b0bb\": rpc error: code = NotFound desc = could not find container \"ba6a295f44100be1b142ca9e85f661d5ae7c3d9f30d9f3f153642d7285e7b0bb\": container with ID starting with ba6a295f44100be1b142ca9e85f661d5ae7c3d9f30d9f3f153642d7285e7b0bb not found: ID does not exist" Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.850679 4728 scope.go:117] "RemoveContainer" containerID="4f92072f7401988fb80a9c97e44c94a727e36c93f9d20217a0973bad9e18b274" Feb 27 10:41:54 crc kubenswrapper[4728]: E0227 10:41:54.850884 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f92072f7401988fb80a9c97e44c94a727e36c93f9d20217a0973bad9e18b274\": container with ID starting with 
4f92072f7401988fb80a9c97e44c94a727e36c93f9d20217a0973bad9e18b274 not found: ID does not exist" containerID="4f92072f7401988fb80a9c97e44c94a727e36c93f9d20217a0973bad9e18b274" Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.850909 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f92072f7401988fb80a9c97e44c94a727e36c93f9d20217a0973bad9e18b274"} err="failed to get container status \"4f92072f7401988fb80a9c97e44c94a727e36c93f9d20217a0973bad9e18b274\": rpc error: code = NotFound desc = could not find container \"4f92072f7401988fb80a9c97e44c94a727e36c93f9d20217a0973bad9e18b274\": container with ID starting with 4f92072f7401988fb80a9c97e44c94a727e36c93f9d20217a0973bad9e18b274 not found: ID does not exist" Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.850925 4728 scope.go:117] "RemoveContainer" containerID="1d1447a9ae59bdde465f72680a1d31096570ca9e077b83a9789c82c433998de8" Feb 27 10:41:54 crc kubenswrapper[4728]: E0227 10:41:54.851100 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d1447a9ae59bdde465f72680a1d31096570ca9e077b83a9789c82c433998de8\": container with ID starting with 1d1447a9ae59bdde465f72680a1d31096570ca9e077b83a9789c82c433998de8 not found: ID does not exist" containerID="1d1447a9ae59bdde465f72680a1d31096570ca9e077b83a9789c82c433998de8" Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.851121 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d1447a9ae59bdde465f72680a1d31096570ca9e077b83a9789c82c433998de8"} err="failed to get container status \"1d1447a9ae59bdde465f72680a1d31096570ca9e077b83a9789c82c433998de8\": rpc error: code = NotFound desc = could not find container \"1d1447a9ae59bdde465f72680a1d31096570ca9e077b83a9789c82c433998de8\": container with ID starting with 1d1447a9ae59bdde465f72680a1d31096570ca9e077b83a9789c82c433998de8 not found: ID does not 
exist" Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.939350 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66" (UID: "e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:41:54 crc kubenswrapper[4728]: I0227 10:41:54.966656 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:41:55 crc kubenswrapper[4728]: I0227 10:41:55.075158 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rlqxl"] Feb 27 10:41:55 crc kubenswrapper[4728]: I0227 10:41:55.080292 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rlqxl"] Feb 27 10:41:56 crc kubenswrapper[4728]: I0227 10:41:56.733137 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66" path="/var/lib/kubelet/pods/e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66/volumes" Feb 27 10:41:56 crc kubenswrapper[4728]: I0227 10:41:56.749902 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-964rz"] Feb 27 10:41:56 crc kubenswrapper[4728]: E0227 10:41:56.750462 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66" containerName="extract-content" Feb 27 10:41:56 crc kubenswrapper[4728]: I0227 10:41:56.750497 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66" containerName="extract-content" Feb 27 10:41:56 crc kubenswrapper[4728]: E0227 10:41:56.750558 4728 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="98dca69a-84ef-4bb6-b656-c345ecc13939" containerName="util" Feb 27 10:41:56 crc kubenswrapper[4728]: I0227 10:41:56.750580 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dca69a-84ef-4bb6-b656-c345ecc13939" containerName="util" Feb 27 10:41:56 crc kubenswrapper[4728]: E0227 10:41:56.750619 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66" containerName="registry-server" Feb 27 10:41:56 crc kubenswrapper[4728]: I0227 10:41:56.750637 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66" containerName="registry-server" Feb 27 10:41:56 crc kubenswrapper[4728]: E0227 10:41:56.750658 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66" containerName="extract-utilities" Feb 27 10:41:56 crc kubenswrapper[4728]: I0227 10:41:56.750670 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66" containerName="extract-utilities" Feb 27 10:41:56 crc kubenswrapper[4728]: E0227 10:41:56.750701 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dca69a-84ef-4bb6-b656-c345ecc13939" containerName="extract" Feb 27 10:41:56 crc kubenswrapper[4728]: I0227 10:41:56.750710 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dca69a-84ef-4bb6-b656-c345ecc13939" containerName="extract" Feb 27 10:41:56 crc kubenswrapper[4728]: E0227 10:41:56.750723 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dca69a-84ef-4bb6-b656-c345ecc13939" containerName="pull" Feb 27 10:41:56 crc kubenswrapper[4728]: I0227 10:41:56.750731 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dca69a-84ef-4bb6-b656-c345ecc13939" containerName="pull" Feb 27 10:41:56 crc kubenswrapper[4728]: I0227 10:41:56.750918 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="98dca69a-84ef-4bb6-b656-c345ecc13939" 
containerName="extract" Feb 27 10:41:56 crc kubenswrapper[4728]: I0227 10:41:56.750951 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ffdfb6-f736-4aab-9a32-c2ac3be5bf66" containerName="registry-server" Feb 27 10:41:56 crc kubenswrapper[4728]: I0227 10:41:56.751654 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-964rz" Feb 27 10:41:56 crc kubenswrapper[4728]: I0227 10:41:56.755155 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 27 10:41:56 crc kubenswrapper[4728]: I0227 10:41:56.755455 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-zsmc5" Feb 27 10:41:56 crc kubenswrapper[4728]: I0227 10:41:56.755946 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 27 10:41:56 crc kubenswrapper[4728]: I0227 10:41:56.764488 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-964rz"] Feb 27 10:41:56 crc kubenswrapper[4728]: I0227 10:41:56.796549 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j65fv\" (UniqueName: \"kubernetes.io/projected/f98e3444-0dcb-40fd-8833-4d492381c226-kube-api-access-j65fv\") pod \"nmstate-operator-75c5dccd6c-964rz\" (UID: \"f98e3444-0dcb-40fd-8833-4d492381c226\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-964rz" Feb 27 10:41:56 crc kubenswrapper[4728]: I0227 10:41:56.898314 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j65fv\" (UniqueName: \"kubernetes.io/projected/f98e3444-0dcb-40fd-8833-4d492381c226-kube-api-access-j65fv\") pod \"nmstate-operator-75c5dccd6c-964rz\" (UID: \"f98e3444-0dcb-40fd-8833-4d492381c226\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-964rz" 
Feb 27 10:41:56 crc kubenswrapper[4728]: I0227 10:41:56.916781 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j65fv\" (UniqueName: \"kubernetes.io/projected/f98e3444-0dcb-40fd-8833-4d492381c226-kube-api-access-j65fv\") pod \"nmstate-operator-75c5dccd6c-964rz\" (UID: \"f98e3444-0dcb-40fd-8833-4d492381c226\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-964rz" Feb 27 10:41:57 crc kubenswrapper[4728]: I0227 10:41:57.071868 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-964rz" Feb 27 10:41:57 crc kubenswrapper[4728]: I0227 10:41:57.528163 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-964rz"] Feb 27 10:41:57 crc kubenswrapper[4728]: I0227 10:41:57.753910 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-964rz" event={"ID":"f98e3444-0dcb-40fd-8833-4d492381c226","Type":"ContainerStarted","Data":"f1ac8a17994c49e43f4f7dfbdef8498dec5b5436f43e3c3e09534ec84d0a6de0"} Feb 27 10:42:00 crc kubenswrapper[4728]: I0227 10:42:00.132695 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536482-5svg8"] Feb 27 10:42:00 crc kubenswrapper[4728]: I0227 10:42:00.134467 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536482-5svg8" Feb 27 10:42:00 crc kubenswrapper[4728]: I0227 10:42:00.137095 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:42:00 crc kubenswrapper[4728]: I0227 10:42:00.138168 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:42:00 crc kubenswrapper[4728]: I0227 10:42:00.138333 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 10:42:00 crc kubenswrapper[4728]: I0227 10:42:00.144544 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmswj\" (UniqueName: \"kubernetes.io/projected/7f16b827-2fa4-4f30-9c9f-d5eeabaa1793-kube-api-access-wmswj\") pod \"auto-csr-approver-29536482-5svg8\" (UID: \"7f16b827-2fa4-4f30-9c9f-d5eeabaa1793\") " pod="openshift-infra/auto-csr-approver-29536482-5svg8" Feb 27 10:42:00 crc kubenswrapper[4728]: I0227 10:42:00.154176 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536482-5svg8"] Feb 27 10:42:00 crc kubenswrapper[4728]: I0227 10:42:00.246288 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmswj\" (UniqueName: \"kubernetes.io/projected/7f16b827-2fa4-4f30-9c9f-d5eeabaa1793-kube-api-access-wmswj\") pod \"auto-csr-approver-29536482-5svg8\" (UID: \"7f16b827-2fa4-4f30-9c9f-d5eeabaa1793\") " pod="openshift-infra/auto-csr-approver-29536482-5svg8" Feb 27 10:42:00 crc kubenswrapper[4728]: I0227 10:42:00.270626 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmswj\" (UniqueName: \"kubernetes.io/projected/7f16b827-2fa4-4f30-9c9f-d5eeabaa1793-kube-api-access-wmswj\") pod \"auto-csr-approver-29536482-5svg8\" (UID: \"7f16b827-2fa4-4f30-9c9f-d5eeabaa1793\") " 
pod="openshift-infra/auto-csr-approver-29536482-5svg8" Feb 27 10:42:00 crc kubenswrapper[4728]: I0227 10:42:00.455481 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536482-5svg8" Feb 27 10:42:00 crc kubenswrapper[4728]: I0227 10:42:00.783158 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-964rz" event={"ID":"f98e3444-0dcb-40fd-8833-4d492381c226","Type":"ContainerStarted","Data":"74ab786d525f23f2f2732c79121c6e862fc8b6786934d131194c73d98f0219d8"} Feb 27 10:42:00 crc kubenswrapper[4728]: I0227 10:42:00.812374 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-964rz" podStartSLOduration=2.523979584 podStartE2EDuration="4.812354717s" podCreationTimestamp="2026-02-27 10:41:56 +0000 UTC" firstStartedPulling="2026-02-27 10:41:57.535545191 +0000 UTC m=+937.497911307" lastFinishedPulling="2026-02-27 10:41:59.823920334 +0000 UTC m=+939.786286440" observedRunningTime="2026-02-27 10:42:00.810234059 +0000 UTC m=+940.772600175" watchObservedRunningTime="2026-02-27 10:42:00.812354717 +0000 UTC m=+940.774720833" Feb 27 10:42:00 crc kubenswrapper[4728]: I0227 10:42:00.974903 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536482-5svg8"] Feb 27 10:42:01 crc kubenswrapper[4728]: I0227 10:42:01.789627 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536482-5svg8" event={"ID":"7f16b827-2fa4-4f30-9c9f-d5eeabaa1793","Type":"ContainerStarted","Data":"0acd9cac2181a443a7933f682d820191d36c35be08d3bc23f875c03dfe445ad2"} Feb 27 10:42:02 crc kubenswrapper[4728]: I0227 10:42:02.800012 4728 generic.go:334] "Generic (PLEG): container finished" podID="7f16b827-2fa4-4f30-9c9f-d5eeabaa1793" containerID="511a8287396f7123a5ee403d95ae693f605a6e64df03c9c717e4dc3b0b14f42d" exitCode=0 Feb 27 10:42:02 crc 
kubenswrapper[4728]: I0227 10:42:02.800068 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536482-5svg8" event={"ID":"7f16b827-2fa4-4f30-9c9f-d5eeabaa1793","Type":"ContainerDied","Data":"511a8287396f7123a5ee403d95ae693f605a6e64df03c9c717e4dc3b0b14f42d"} Feb 27 10:42:04 crc kubenswrapper[4728]: I0227 10:42:04.161768 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536482-5svg8" Feb 27 10:42:04 crc kubenswrapper[4728]: I0227 10:42:04.224007 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmswj\" (UniqueName: \"kubernetes.io/projected/7f16b827-2fa4-4f30-9c9f-d5eeabaa1793-kube-api-access-wmswj\") pod \"7f16b827-2fa4-4f30-9c9f-d5eeabaa1793\" (UID: \"7f16b827-2fa4-4f30-9c9f-d5eeabaa1793\") " Feb 27 10:42:04 crc kubenswrapper[4728]: I0227 10:42:04.229725 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f16b827-2fa4-4f30-9c9f-d5eeabaa1793-kube-api-access-wmswj" (OuterVolumeSpecName: "kube-api-access-wmswj") pod "7f16b827-2fa4-4f30-9c9f-d5eeabaa1793" (UID: "7f16b827-2fa4-4f30-9c9f-d5eeabaa1793"). InnerVolumeSpecName "kube-api-access-wmswj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:42:04 crc kubenswrapper[4728]: I0227 10:42:04.326491 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmswj\" (UniqueName: \"kubernetes.io/projected/7f16b827-2fa4-4f30-9c9f-d5eeabaa1793-kube-api-access-wmswj\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:04 crc kubenswrapper[4728]: I0227 10:42:04.827884 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536482-5svg8" event={"ID":"7f16b827-2fa4-4f30-9c9f-d5eeabaa1793","Type":"ContainerDied","Data":"0acd9cac2181a443a7933f682d820191d36c35be08d3bc23f875c03dfe445ad2"} Feb 27 10:42:04 crc kubenswrapper[4728]: I0227 10:42:04.828242 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0acd9cac2181a443a7933f682d820191d36c35be08d3bc23f875c03dfe445ad2" Feb 27 10:42:04 crc kubenswrapper[4728]: I0227 10:42:04.827987 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536482-5svg8" Feb 27 10:42:05 crc kubenswrapper[4728]: I0227 10:42:05.220307 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536476-hv4q5"] Feb 27 10:42:05 crc kubenswrapper[4728]: I0227 10:42:05.227085 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536476-hv4q5"] Feb 27 10:42:05 crc kubenswrapper[4728]: I0227 10:42:05.922613 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:42:05 crc kubenswrapper[4728]: I0227 10:42:05.922676 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" 
podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:42:05 crc kubenswrapper[4728]: I0227 10:42:05.922722 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:42:05 crc kubenswrapper[4728]: I0227 10:42:05.923413 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c1db5be2b8f7ae48c2eb85c7a1f9d89d594ab5c8b362069a65d852dc6140374"} pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 10:42:05 crc kubenswrapper[4728]: I0227 10:42:05.923462 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" containerID="cri-o://0c1db5be2b8f7ae48c2eb85c7a1f9d89d594ab5c8b362069a65d852dc6140374" gracePeriod=600 Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.640802 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-cz9mk"] Feb 27 10:42:06 crc kubenswrapper[4728]: E0227 10:42:06.641524 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f16b827-2fa4-4f30-9c9f-d5eeabaa1793" containerName="oc" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.641536 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f16b827-2fa4-4f30-9c9f-d5eeabaa1793" containerName="oc" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.641675 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f16b827-2fa4-4f30-9c9f-d5eeabaa1793" containerName="oc" Feb 27 10:42:06 crc 
kubenswrapper[4728]: I0227 10:42:06.642380 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-cz9mk" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.643984 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-fnffv" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.657893 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-cz9mk"] Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.692586 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv77s\" (UniqueName: \"kubernetes.io/projected/339eedb6-7c4e-4837-9cef-a76ee6398990-kube-api-access-nv77s\") pod \"nmstate-metrics-69594cc75-cz9mk\" (UID: \"339eedb6-7c4e-4837-9cef-a76ee6398990\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-cz9mk" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.697783 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-tw7dm"] Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.698673 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-tw7dm" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.702235 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.718125 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-tw7dm"] Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.723841 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-njfzf"] Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.724869 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-njfzf" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.734594 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf9c8f91-e276-4cbb-879a-505120485bf3" path="/var/lib/kubelet/pods/bf9c8f91-e276-4cbb-879a-505120485bf3/volumes" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.794523 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlqbb\" (UniqueName: \"kubernetes.io/projected/c3efa330-29e0-41a2-aa09-86babc8aa9b4-kube-api-access-zlqbb\") pod \"nmstate-handler-njfzf\" (UID: \"c3efa330-29e0-41a2-aa09-86babc8aa9b4\") " pod="openshift-nmstate/nmstate-handler-njfzf" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.794577 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv77s\" (UniqueName: \"kubernetes.io/projected/339eedb6-7c4e-4837-9cef-a76ee6398990-kube-api-access-nv77s\") pod \"nmstate-metrics-69594cc75-cz9mk\" (UID: \"339eedb6-7c4e-4837-9cef-a76ee6398990\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-cz9mk" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.794685 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c3efa330-29e0-41a2-aa09-86babc8aa9b4-ovs-socket\") pod \"nmstate-handler-njfzf\" (UID: \"c3efa330-29e0-41a2-aa09-86babc8aa9b4\") " pod="openshift-nmstate/nmstate-handler-njfzf" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.794709 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddp47\" (UniqueName: \"kubernetes.io/projected/a28afdcc-7a97-430e-a333-5c1eac61d005-kube-api-access-ddp47\") pod \"nmstate-webhook-786f45cff4-tw7dm\" (UID: \"a28afdcc-7a97-430e-a333-5c1eac61d005\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-tw7dm" Feb 27 
10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.794745 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a28afdcc-7a97-430e-a333-5c1eac61d005-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-tw7dm\" (UID: \"a28afdcc-7a97-430e-a333-5c1eac61d005\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-tw7dm" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.794790 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c3efa330-29e0-41a2-aa09-86babc8aa9b4-nmstate-lock\") pod \"nmstate-handler-njfzf\" (UID: \"c3efa330-29e0-41a2-aa09-86babc8aa9b4\") " pod="openshift-nmstate/nmstate-handler-njfzf" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.794808 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c3efa330-29e0-41a2-aa09-86babc8aa9b4-dbus-socket\") pod \"nmstate-handler-njfzf\" (UID: \"c3efa330-29e0-41a2-aa09-86babc8aa9b4\") " pod="openshift-nmstate/nmstate-handler-njfzf" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.837465 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv77s\" (UniqueName: \"kubernetes.io/projected/339eedb6-7c4e-4837-9cef-a76ee6398990-kube-api-access-nv77s\") pod \"nmstate-metrics-69594cc75-cz9mk\" (UID: \"339eedb6-7c4e-4837-9cef-a76ee6398990\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-cz9mk" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.850914 4728 generic.go:334] "Generic (PLEG): container finished" podID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerID="0c1db5be2b8f7ae48c2eb85c7a1f9d89d594ab5c8b362069a65d852dc6140374" exitCode=0 Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.851277 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerDied","Data":"0c1db5be2b8f7ae48c2eb85c7a1f9d89d594ab5c8b362069a65d852dc6140374"} Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.851337 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"25e402b0eb27122e7fbd4811edc6c8ff99dce0897b61a2efd27b0c5dbb0c9671"} Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.851355 4728 scope.go:117] "RemoveContainer" containerID="7142bbcd5732490b77191220972aa455a45bbcb3be86cc2f77bc37171cdfdc5d" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.896440 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c3efa330-29e0-41a2-aa09-86babc8aa9b4-ovs-socket\") pod \"nmstate-handler-njfzf\" (UID: \"c3efa330-29e0-41a2-aa09-86babc8aa9b4\") " pod="openshift-nmstate/nmstate-handler-njfzf" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.896523 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddp47\" (UniqueName: \"kubernetes.io/projected/a28afdcc-7a97-430e-a333-5c1eac61d005-kube-api-access-ddp47\") pod \"nmstate-webhook-786f45cff4-tw7dm\" (UID: \"a28afdcc-7a97-430e-a333-5c1eac61d005\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-tw7dm" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.896578 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c3efa330-29e0-41a2-aa09-86babc8aa9b4-ovs-socket\") pod \"nmstate-handler-njfzf\" (UID: \"c3efa330-29e0-41a2-aa09-86babc8aa9b4\") " pod="openshift-nmstate/nmstate-handler-njfzf" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.896614 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a28afdcc-7a97-430e-a333-5c1eac61d005-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-tw7dm\" (UID: \"a28afdcc-7a97-430e-a333-5c1eac61d005\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-tw7dm" Feb 27 10:42:06 crc kubenswrapper[4728]: E0227 10:42:06.896724 4728 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.896916 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c3efa330-29e0-41a2-aa09-86babc8aa9b4-nmstate-lock\") pod \"nmstate-handler-njfzf\" (UID: \"c3efa330-29e0-41a2-aa09-86babc8aa9b4\") " pod="openshift-nmstate/nmstate-handler-njfzf" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.896956 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c3efa330-29e0-41a2-aa09-86babc8aa9b4-dbus-socket\") pod \"nmstate-handler-njfzf\" (UID: \"c3efa330-29e0-41a2-aa09-86babc8aa9b4\") " pod="openshift-nmstate/nmstate-handler-njfzf" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.896998 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlqbb\" (UniqueName: \"kubernetes.io/projected/c3efa330-29e0-41a2-aa09-86babc8aa9b4-kube-api-access-zlqbb\") pod \"nmstate-handler-njfzf\" (UID: \"c3efa330-29e0-41a2-aa09-86babc8aa9b4\") " pod="openshift-nmstate/nmstate-handler-njfzf" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.897249 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4cmrc"] Feb 27 10:42:06 crc kubenswrapper[4728]: E0227 10:42:06.897301 4728 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a28afdcc-7a97-430e-a333-5c1eac61d005-tls-key-pair podName:a28afdcc-7a97-430e-a333-5c1eac61d005 nodeName:}" failed. No retries permitted until 2026-02-27 10:42:07.39728163 +0000 UTC m=+947.359647736 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/a28afdcc-7a97-430e-a333-5c1eac61d005-tls-key-pair") pod "nmstate-webhook-786f45cff4-tw7dm" (UID: "a28afdcc-7a97-430e-a333-5c1eac61d005") : secret "openshift-nmstate-webhook" not found Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.897588 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c3efa330-29e0-41a2-aa09-86babc8aa9b4-dbus-socket\") pod \"nmstate-handler-njfzf\" (UID: \"c3efa330-29e0-41a2-aa09-86babc8aa9b4\") " pod="openshift-nmstate/nmstate-handler-njfzf" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.898155 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4cmrc" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.897275 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c3efa330-29e0-41a2-aa09-86babc8aa9b4-nmstate-lock\") pod \"nmstate-handler-njfzf\" (UID: \"c3efa330-29e0-41a2-aa09-86babc8aa9b4\") " pod="openshift-nmstate/nmstate-handler-njfzf" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.902211 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.902331 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-797f6" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.902601 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.916574 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4cmrc"] Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.927567 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddp47\" (UniqueName: \"kubernetes.io/projected/a28afdcc-7a97-430e-a333-5c1eac61d005-kube-api-access-ddp47\") pod \"nmstate-webhook-786f45cff4-tw7dm\" (UID: \"a28afdcc-7a97-430e-a333-5c1eac61d005\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-tw7dm" Feb 27 10:42:06 crc kubenswrapper[4728]: I0227 10:42:06.945877 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlqbb\" (UniqueName: \"kubernetes.io/projected/c3efa330-29e0-41a2-aa09-86babc8aa9b4-kube-api-access-zlqbb\") pod \"nmstate-handler-njfzf\" (UID: \"c3efa330-29e0-41a2-aa09-86babc8aa9b4\") " pod="openshift-nmstate/nmstate-handler-njfzf" Feb 27 10:42:06 crc kubenswrapper[4728]: 
I0227 10:42:06.957227 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-cz9mk" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:06.998033 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e8ec681-17c4-4bcd-b81a-92de549c1523-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-4cmrc\" (UID: \"9e8ec681-17c4-4bcd-b81a-92de549c1523\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4cmrc" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:06.998187 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9e8ec681-17c4-4bcd-b81a-92de549c1523-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-4cmrc\" (UID: \"9e8ec681-17c4-4bcd-b81a-92de549c1523\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4cmrc" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:06.998277 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhdwc\" (UniqueName: \"kubernetes.io/projected/9e8ec681-17c4-4bcd-b81a-92de549c1523-kube-api-access-jhdwc\") pod \"nmstate-console-plugin-5dcbbd79cf-4cmrc\" (UID: \"9e8ec681-17c4-4bcd-b81a-92de549c1523\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4cmrc" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.040360 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-njfzf" Feb 27 10:42:07 crc kubenswrapper[4728]: W0227 10:42:07.062245 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3efa330_29e0_41a2_aa09_86babc8aa9b4.slice/crio-622a66932c9766589b79a7500ff35b26ada56390225b0fbfe6a3b4ba981c54f5 WatchSource:0}: Error finding container 622a66932c9766589b79a7500ff35b26ada56390225b0fbfe6a3b4ba981c54f5: Status 404 returned error can't find the container with id 622a66932c9766589b79a7500ff35b26ada56390225b0fbfe6a3b4ba981c54f5 Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.107621 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9e8ec681-17c4-4bcd-b81a-92de549c1523-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-4cmrc\" (UID: \"9e8ec681-17c4-4bcd-b81a-92de549c1523\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4cmrc" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.107950 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhdwc\" (UniqueName: \"kubernetes.io/projected/9e8ec681-17c4-4bcd-b81a-92de549c1523-kube-api-access-jhdwc\") pod \"nmstate-console-plugin-5dcbbd79cf-4cmrc\" (UID: \"9e8ec681-17c4-4bcd-b81a-92de549c1523\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4cmrc" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.108012 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e8ec681-17c4-4bcd-b81a-92de549c1523-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-4cmrc\" (UID: \"9e8ec681-17c4-4bcd-b81a-92de549c1523\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4cmrc" Feb 27 10:42:07 crc kubenswrapper[4728]: E0227 10:42:07.108125 4728 secret.go:188] Couldn't get secret 
openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 27 10:42:07 crc kubenswrapper[4728]: E0227 10:42:07.108169 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e8ec681-17c4-4bcd-b81a-92de549c1523-plugin-serving-cert podName:9e8ec681-17c4-4bcd-b81a-92de549c1523 nodeName:}" failed. No retries permitted until 2026-02-27 10:42:07.60815568 +0000 UTC m=+947.570521786 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/9e8ec681-17c4-4bcd-b81a-92de549c1523-plugin-serving-cert") pod "nmstate-console-plugin-5dcbbd79cf-4cmrc" (UID: "9e8ec681-17c4-4bcd-b81a-92de549c1523") : secret "plugin-serving-cert" not found Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.109064 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9e8ec681-17c4-4bcd-b81a-92de549c1523-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-4cmrc\" (UID: \"9e8ec681-17c4-4bcd-b81a-92de549c1523\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4cmrc" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.115365 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7b97b57845-jhmxj"] Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.126766 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.133958 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b97b57845-jhmxj"] Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.149485 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhdwc\" (UniqueName: \"kubernetes.io/projected/9e8ec681-17c4-4bcd-b81a-92de549c1523-kube-api-access-jhdwc\") pod \"nmstate-console-plugin-5dcbbd79cf-4cmrc\" (UID: \"9e8ec681-17c4-4bcd-b81a-92de549c1523\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4cmrc" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.209236 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f82e439f-89e2-4143-b9c6-1935c3154d0c-console-serving-cert\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.209325 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f82e439f-89e2-4143-b9c6-1935c3154d0c-console-oauth-config\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.209346 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lwfj\" (UniqueName: \"kubernetes.io/projected/f82e439f-89e2-4143-b9c6-1935c3154d0c-kube-api-access-6lwfj\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.209385 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-oauth-serving-cert\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.209414 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-console-config\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.209428 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-trusted-ca-bundle\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.209445 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-service-ca\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.310776 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-oauth-serving-cert\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc 
kubenswrapper[4728]: I0227 10:42:07.310858 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-console-config\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.310897 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-trusted-ca-bundle\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.310920 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-service-ca\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.311004 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f82e439f-89e2-4143-b9c6-1935c3154d0c-console-serving-cert\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.311065 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f82e439f-89e2-4143-b9c6-1935c3154d0c-console-oauth-config\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 
10:42:07.311084 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lwfj\" (UniqueName: \"kubernetes.io/projected/f82e439f-89e2-4143-b9c6-1935c3154d0c-kube-api-access-6lwfj\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.311928 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-oauth-serving-cert\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.311988 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-console-config\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.312121 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-service-ca\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.312191 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-trusted-ca-bundle\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.314000 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f82e439f-89e2-4143-b9c6-1935c3154d0c-console-serving-cert\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.316151 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f82e439f-89e2-4143-b9c6-1935c3154d0c-console-oauth-config\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.328544 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lwfj\" (UniqueName: \"kubernetes.io/projected/f82e439f-89e2-4143-b9c6-1935c3154d0c-kube-api-access-6lwfj\") pod \"console-7b97b57845-jhmxj\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.412540 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a28afdcc-7a97-430e-a333-5c1eac61d005-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-tw7dm\" (UID: \"a28afdcc-7a97-430e-a333-5c1eac61d005\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-tw7dm" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.417549 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a28afdcc-7a97-430e-a333-5c1eac61d005-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-tw7dm\" (UID: \"a28afdcc-7a97-430e-a333-5c1eac61d005\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-tw7dm" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.450665 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-nmstate/nmstate-metrics-69594cc75-cz9mk"] Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.486210 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.611884 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-tw7dm" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.615960 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e8ec681-17c4-4bcd-b81a-92de549c1523-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-4cmrc\" (UID: \"9e8ec681-17c4-4bcd-b81a-92de549c1523\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4cmrc" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.621074 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e8ec681-17c4-4bcd-b81a-92de549c1523-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-4cmrc\" (UID: \"9e8ec681-17c4-4bcd-b81a-92de549c1523\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4cmrc" Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.712804 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b97b57845-jhmxj"] Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.863647 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-njfzf" event={"ID":"c3efa330-29e0-41a2-aa09-86babc8aa9b4","Type":"ContainerStarted","Data":"622a66932c9766589b79a7500ff35b26ada56390225b0fbfe6a3b4ba981c54f5"} Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.865378 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b97b57845-jhmxj" 
event={"ID":"f82e439f-89e2-4143-b9c6-1935c3154d0c","Type":"ContainerStarted","Data":"8256301e31844a2e02d2e5f2b1a3306096242ecd13811ee048cc9d88b1bae632"} Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.868710 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-cz9mk" event={"ID":"339eedb6-7c4e-4837-9cef-a76ee6398990","Type":"ContainerStarted","Data":"02f8b15cbfee8cd017443b86d4470c3d834dbfe4844ab5660e05b4f7788894dd"} Feb 27 10:42:07 crc kubenswrapper[4728]: I0227 10:42:07.891114 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4cmrc" Feb 27 10:42:08 crc kubenswrapper[4728]: I0227 10:42:08.075088 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-tw7dm"] Feb 27 10:42:08 crc kubenswrapper[4728]: I0227 10:42:08.334497 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4cmrc"] Feb 27 10:42:08 crc kubenswrapper[4728]: W0227 10:42:08.334840 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e8ec681_17c4_4bcd_b81a_92de549c1523.slice/crio-a7340e88acc2ccfdd4e9c3698a08b179d626236c1e64f052838555e7a906a2dd WatchSource:0}: Error finding container a7340e88acc2ccfdd4e9c3698a08b179d626236c1e64f052838555e7a906a2dd: Status 404 returned error can't find the container with id a7340e88acc2ccfdd4e9c3698a08b179d626236c1e64f052838555e7a906a2dd Feb 27 10:42:08 crc kubenswrapper[4728]: I0227 10:42:08.878883 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b97b57845-jhmxj" event={"ID":"f82e439f-89e2-4143-b9c6-1935c3154d0c","Type":"ContainerStarted","Data":"7940b5dc9d3f0c57017adf88f3e553f526d8fb2e1a7753dfe7f8ec881961af03"} Feb 27 10:42:08 crc kubenswrapper[4728]: I0227 10:42:08.882089 4728 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-tw7dm" event={"ID":"a28afdcc-7a97-430e-a333-5c1eac61d005","Type":"ContainerStarted","Data":"054c96b094f5bce2458010bba86ceed50724afcf12d7b7f998805a3a9e9de75c"} Feb 27 10:42:08 crc kubenswrapper[4728]: I0227 10:42:08.883603 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4cmrc" event={"ID":"9e8ec681-17c4-4bcd-b81a-92de549c1523","Type":"ContainerStarted","Data":"a7340e88acc2ccfdd4e9c3698a08b179d626236c1e64f052838555e7a906a2dd"} Feb 27 10:42:08 crc kubenswrapper[4728]: I0227 10:42:08.905458 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7b97b57845-jhmxj" podStartSLOduration=1.9054306479999998 podStartE2EDuration="1.905430648s" podCreationTimestamp="2026-02-27 10:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:42:08.896193034 +0000 UTC m=+948.858559160" watchObservedRunningTime="2026-02-27 10:42:08.905430648 +0000 UTC m=+948.867796764" Feb 27 10:42:10 crc kubenswrapper[4728]: I0227 10:42:10.899743 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-njfzf" event={"ID":"c3efa330-29e0-41a2-aa09-86babc8aa9b4","Type":"ContainerStarted","Data":"1d7a1a6b2b4c5d3add046e359391311af0d5c97a1fb17a55bee970832932bd0d"} Feb 27 10:42:10 crc kubenswrapper[4728]: I0227 10:42:10.900342 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-njfzf" Feb 27 10:42:10 crc kubenswrapper[4728]: I0227 10:42:10.903033 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-tw7dm" event={"ID":"a28afdcc-7a97-430e-a333-5c1eac61d005","Type":"ContainerStarted","Data":"38e2b66adf5b8ae2a3d31e71f1c3403ebdbfd8804e5e05bd10cf982df41e6d3f"} Feb 27 10:42:10 crc 
kubenswrapper[4728]: I0227 10:42:10.906923 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-cz9mk" event={"ID":"339eedb6-7c4e-4837-9cef-a76ee6398990","Type":"ContainerStarted","Data":"3d0e8d0f5cac29c0dc30fa55bd1021bf728d4ec7a02d2325378452b6fb2ed7a1"} Feb 27 10:42:10 crc kubenswrapper[4728]: I0227 10:42:10.919104 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-njfzf" podStartSLOduration=1.830916387 podStartE2EDuration="4.919081956s" podCreationTimestamp="2026-02-27 10:42:06 +0000 UTC" firstStartedPulling="2026-02-27 10:42:07.067649756 +0000 UTC m=+947.030015862" lastFinishedPulling="2026-02-27 10:42:10.155815325 +0000 UTC m=+950.118181431" observedRunningTime="2026-02-27 10:42:10.914080068 +0000 UTC m=+950.876446184" watchObservedRunningTime="2026-02-27 10:42:10.919081956 +0000 UTC m=+950.881448092" Feb 27 10:42:10 crc kubenswrapper[4728]: I0227 10:42:10.940045 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-tw7dm" podStartSLOduration=2.8879541570000002 podStartE2EDuration="4.940024082s" podCreationTimestamp="2026-02-27 10:42:06 +0000 UTC" firstStartedPulling="2026-02-27 10:42:08.091173654 +0000 UTC m=+948.053539760" lastFinishedPulling="2026-02-27 10:42:10.143243579 +0000 UTC m=+950.105609685" observedRunningTime="2026-02-27 10:42:10.928906196 +0000 UTC m=+950.891272312" watchObservedRunningTime="2026-02-27 10:42:10.940024082 +0000 UTC m=+950.902390198" Feb 27 10:42:11 crc kubenswrapper[4728]: I0227 10:42:11.920182 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4cmrc" event={"ID":"9e8ec681-17c4-4bcd-b81a-92de549c1523","Type":"ContainerStarted","Data":"b08e7118ede2806a6b5f8d54c9831763f0baa6fde2b6620fce95055ef31e7a3c"} Feb 27 10:42:11 crc kubenswrapper[4728]: I0227 10:42:11.922024 4728 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-tw7dm" Feb 27 10:42:11 crc kubenswrapper[4728]: I0227 10:42:11.945984 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4cmrc" podStartSLOduration=3.00622049 podStartE2EDuration="5.945962996s" podCreationTimestamp="2026-02-27 10:42:06 +0000 UTC" firstStartedPulling="2026-02-27 10:42:08.337198221 +0000 UTC m=+948.299564327" lastFinishedPulling="2026-02-27 10:42:11.276940727 +0000 UTC m=+951.239306833" observedRunningTime="2026-02-27 10:42:11.941230976 +0000 UTC m=+951.903597122" watchObservedRunningTime="2026-02-27 10:42:11.945962996 +0000 UTC m=+951.908329112" Feb 27 10:42:17 crc kubenswrapper[4728]: I0227 10:42:17.068065 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-njfzf" Feb 27 10:42:17 crc kubenswrapper[4728]: I0227 10:42:17.487326 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:17 crc kubenswrapper[4728]: I0227 10:42:17.487836 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:17 crc kubenswrapper[4728]: I0227 10:42:17.494932 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:17 crc kubenswrapper[4728]: I0227 10:42:17.990832 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:42:18 crc kubenswrapper[4728]: I0227 10:42:18.096542 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6bb5cd776f-wblt4"] Feb 27 10:42:21 crc kubenswrapper[4728]: I0227 10:42:21.011530 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-cz9mk" 
event={"ID":"339eedb6-7c4e-4837-9cef-a76ee6398990","Type":"ContainerStarted","Data":"93f864760a44dab0452b1d726ec5ed4f15de5e95faf40c3feffd72c3f44ee069"} Feb 27 10:42:21 crc kubenswrapper[4728]: I0227 10:42:21.036334 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-cz9mk" podStartSLOduration=1.829397876 podStartE2EDuration="15.036312343s" podCreationTimestamp="2026-02-27 10:42:06 +0000 UTC" firstStartedPulling="2026-02-27 10:42:07.46083942 +0000 UTC m=+947.423205536" lastFinishedPulling="2026-02-27 10:42:20.667753887 +0000 UTC m=+960.630120003" observedRunningTime="2026-02-27 10:42:21.030969726 +0000 UTC m=+960.993335832" watchObservedRunningTime="2026-02-27 10:42:21.036312343 +0000 UTC m=+960.998678449" Feb 27 10:42:27 crc kubenswrapper[4728]: I0227 10:42:27.620076 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-tw7dm" Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.146898 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6bb5cd776f-wblt4" podUID="debddd2a-191c-4a46-950a-866e0e68b4be" containerName="console" containerID="cri-o://ea55bdb38600049e2e0e775a2ede0430adeff6fa8455d21d408572240d8ff1d7" gracePeriod=15 Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.609481 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6bb5cd776f-wblt4_debddd2a-191c-4a46-950a-866e0e68b4be/console/0.log" Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.609753 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.758658 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/debddd2a-191c-4a46-950a-866e0e68b4be-console-oauth-config\") pod \"debddd2a-191c-4a46-950a-866e0e68b4be\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.759852 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52fvp\" (UniqueName: \"kubernetes.io/projected/debddd2a-191c-4a46-950a-866e0e68b4be-kube-api-access-52fvp\") pod \"debddd2a-191c-4a46-950a-866e0e68b4be\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.760031 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-trusted-ca-bundle\") pod \"debddd2a-191c-4a46-950a-866e0e68b4be\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.760091 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-service-ca\") pod \"debddd2a-191c-4a46-950a-866e0e68b4be\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.760110 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-oauth-serving-cert\") pod \"debddd2a-191c-4a46-950a-866e0e68b4be\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.760178 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-console-config\") pod \"debddd2a-191c-4a46-950a-866e0e68b4be\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.760234 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/debddd2a-191c-4a46-950a-866e0e68b4be-console-serving-cert\") pod \"debddd2a-191c-4a46-950a-866e0e68b4be\" (UID: \"debddd2a-191c-4a46-950a-866e0e68b4be\") " Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.760968 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "debddd2a-191c-4a46-950a-866e0e68b4be" (UID: "debddd2a-191c-4a46-950a-866e0e68b4be"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.760993 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "debddd2a-191c-4a46-950a-866e0e68b4be" (UID: "debddd2a-191c-4a46-950a-866e0e68b4be"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.761016 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-console-config" (OuterVolumeSpecName: "console-config") pod "debddd2a-191c-4a46-950a-866e0e68b4be" (UID: "debddd2a-191c-4a46-950a-866e0e68b4be"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.760979 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-service-ca" (OuterVolumeSpecName: "service-ca") pod "debddd2a-191c-4a46-950a-866e0e68b4be" (UID: "debddd2a-191c-4a46-950a-866e0e68b4be"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.764946 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/debddd2a-191c-4a46-950a-866e0e68b4be-kube-api-access-52fvp" (OuterVolumeSpecName: "kube-api-access-52fvp") pod "debddd2a-191c-4a46-950a-866e0e68b4be" (UID: "debddd2a-191c-4a46-950a-866e0e68b4be"). InnerVolumeSpecName "kube-api-access-52fvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.771781 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/debddd2a-191c-4a46-950a-866e0e68b4be-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "debddd2a-191c-4a46-950a-866e0e68b4be" (UID: "debddd2a-191c-4a46-950a-866e0e68b4be"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.775696 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/debddd2a-191c-4a46-950a-866e0e68b4be-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "debddd2a-191c-4a46-950a-866e0e68b4be" (UID: "debddd2a-191c-4a46-950a-866e0e68b4be"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.862147 4728 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.862190 4728 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.862224 4728 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-console-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.862236 4728 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/debddd2a-191c-4a46-950a-866e0e68b4be-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.862249 4728 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/debddd2a-191c-4a46-950a-866e0e68b4be-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.862260 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52fvp\" (UniqueName: \"kubernetes.io/projected/debddd2a-191c-4a46-950a-866e0e68b4be-kube-api-access-52fvp\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:43 crc kubenswrapper[4728]: I0227 10:42:43.862270 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/debddd2a-191c-4a46-950a-866e0e68b4be-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:44 crc 
kubenswrapper[4728]: I0227 10:42:44.204410 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6bb5cd776f-wblt4_debddd2a-191c-4a46-950a-866e0e68b4be/console/0.log" Feb 27 10:42:44 crc kubenswrapper[4728]: I0227 10:42:44.204476 4728 generic.go:334] "Generic (PLEG): container finished" podID="debddd2a-191c-4a46-950a-866e0e68b4be" containerID="ea55bdb38600049e2e0e775a2ede0430adeff6fa8455d21d408572240d8ff1d7" exitCode=2 Feb 27 10:42:44 crc kubenswrapper[4728]: I0227 10:42:44.204526 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bb5cd776f-wblt4" event={"ID":"debddd2a-191c-4a46-950a-866e0e68b4be","Type":"ContainerDied","Data":"ea55bdb38600049e2e0e775a2ede0430adeff6fa8455d21d408572240d8ff1d7"} Feb 27 10:42:44 crc kubenswrapper[4728]: I0227 10:42:44.204567 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bb5cd776f-wblt4" event={"ID":"debddd2a-191c-4a46-950a-866e0e68b4be","Type":"ContainerDied","Data":"071f821e8671397138ea2e1cf973c6d8629abc425a21ecff570df99169acd675"} Feb 27 10:42:44 crc kubenswrapper[4728]: I0227 10:42:44.204588 4728 scope.go:117] "RemoveContainer" containerID="ea55bdb38600049e2e0e775a2ede0430adeff6fa8455d21d408572240d8ff1d7" Feb 27 10:42:44 crc kubenswrapper[4728]: I0227 10:42:44.204598 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6bb5cd776f-wblt4" Feb 27 10:42:44 crc kubenswrapper[4728]: I0227 10:42:44.242141 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6bb5cd776f-wblt4"] Feb 27 10:42:44 crc kubenswrapper[4728]: I0227 10:42:44.242920 4728 scope.go:117] "RemoveContainer" containerID="ea55bdb38600049e2e0e775a2ede0430adeff6fa8455d21d408572240d8ff1d7" Feb 27 10:42:44 crc kubenswrapper[4728]: E0227 10:42:44.243350 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea55bdb38600049e2e0e775a2ede0430adeff6fa8455d21d408572240d8ff1d7\": container with ID starting with ea55bdb38600049e2e0e775a2ede0430adeff6fa8455d21d408572240d8ff1d7 not found: ID does not exist" containerID="ea55bdb38600049e2e0e775a2ede0430adeff6fa8455d21d408572240d8ff1d7" Feb 27 10:42:44 crc kubenswrapper[4728]: I0227 10:42:44.243388 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea55bdb38600049e2e0e775a2ede0430adeff6fa8455d21d408572240d8ff1d7"} err="failed to get container status \"ea55bdb38600049e2e0e775a2ede0430adeff6fa8455d21d408572240d8ff1d7\": rpc error: code = NotFound desc = could not find container \"ea55bdb38600049e2e0e775a2ede0430adeff6fa8455d21d408572240d8ff1d7\": container with ID starting with ea55bdb38600049e2e0e775a2ede0430adeff6fa8455d21d408572240d8ff1d7 not found: ID does not exist" Feb 27 10:42:44 crc kubenswrapper[4728]: I0227 10:42:44.249323 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6bb5cd776f-wblt4"] Feb 27 10:42:44 crc kubenswrapper[4728]: I0227 10:42:44.734261 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="debddd2a-191c-4a46-950a-866e0e68b4be" path="/var/lib/kubelet/pods/debddd2a-191c-4a46-950a-866e0e68b4be/volumes" Feb 27 10:42:46 crc kubenswrapper[4728]: I0227 10:42:46.954829 4728 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc"] Feb 27 10:42:46 crc kubenswrapper[4728]: E0227 10:42:46.955388 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debddd2a-191c-4a46-950a-866e0e68b4be" containerName="console" Feb 27 10:42:46 crc kubenswrapper[4728]: I0227 10:42:46.955401 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="debddd2a-191c-4a46-950a-866e0e68b4be" containerName="console" Feb 27 10:42:46 crc kubenswrapper[4728]: I0227 10:42:46.955559 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="debddd2a-191c-4a46-950a-866e0e68b4be" containerName="console" Feb 27 10:42:46 crc kubenswrapper[4728]: I0227 10:42:46.956580 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc" Feb 27 10:42:46 crc kubenswrapper[4728]: I0227 10:42:46.958806 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 27 10:42:46 crc kubenswrapper[4728]: I0227 10:42:46.973041 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc"] Feb 27 10:42:47 crc kubenswrapper[4728]: I0227 10:42:47.112416 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6dd47e0-f719-4fb7-99ea-33468a6bdc97-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc\" (UID: \"a6dd47e0-f719-4fb7-99ea-33468a6bdc97\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc" Feb 27 10:42:47 crc kubenswrapper[4728]: I0227 10:42:47.112521 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a6dd47e0-f719-4fb7-99ea-33468a6bdc97-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc\" (UID: \"a6dd47e0-f719-4fb7-99ea-33468a6bdc97\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc" Feb 27 10:42:47 crc kubenswrapper[4728]: I0227 10:42:47.112604 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c85p2\" (UniqueName: \"kubernetes.io/projected/a6dd47e0-f719-4fb7-99ea-33468a6bdc97-kube-api-access-c85p2\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc\" (UID: \"a6dd47e0-f719-4fb7-99ea-33468a6bdc97\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc" Feb 27 10:42:47 crc kubenswrapper[4728]: I0227 10:42:47.214560 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c85p2\" (UniqueName: \"kubernetes.io/projected/a6dd47e0-f719-4fb7-99ea-33468a6bdc97-kube-api-access-c85p2\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc\" (UID: \"a6dd47e0-f719-4fb7-99ea-33468a6bdc97\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc" Feb 27 10:42:47 crc kubenswrapper[4728]: I0227 10:42:47.214697 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6dd47e0-f719-4fb7-99ea-33468a6bdc97-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc\" (UID: \"a6dd47e0-f719-4fb7-99ea-33468a6bdc97\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc" Feb 27 10:42:47 crc kubenswrapper[4728]: I0227 10:42:47.214841 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6dd47e0-f719-4fb7-99ea-33468a6bdc97-util\") pod 
\"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc\" (UID: \"a6dd47e0-f719-4fb7-99ea-33468a6bdc97\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc" Feb 27 10:42:47 crc kubenswrapper[4728]: I0227 10:42:47.215186 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6dd47e0-f719-4fb7-99ea-33468a6bdc97-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc\" (UID: \"a6dd47e0-f719-4fb7-99ea-33468a6bdc97\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc" Feb 27 10:42:47 crc kubenswrapper[4728]: I0227 10:42:47.215550 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6dd47e0-f719-4fb7-99ea-33468a6bdc97-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc\" (UID: \"a6dd47e0-f719-4fb7-99ea-33468a6bdc97\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc" Feb 27 10:42:47 crc kubenswrapper[4728]: I0227 10:42:47.242230 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c85p2\" (UniqueName: \"kubernetes.io/projected/a6dd47e0-f719-4fb7-99ea-33468a6bdc97-kube-api-access-c85p2\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc\" (UID: \"a6dd47e0-f719-4fb7-99ea-33468a6bdc97\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc" Feb 27 10:42:47 crc kubenswrapper[4728]: I0227 10:42:47.283903 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc" Feb 27 10:42:47 crc kubenswrapper[4728]: I0227 10:42:47.752435 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc"] Feb 27 10:42:48 crc kubenswrapper[4728]: I0227 10:42:48.015424 4728 scope.go:117] "RemoveContainer" containerID="ed6689cb9a50987b89277af9ed5f9a6e61e2993f7d3db69e312c777ebb5bd9ea" Feb 27 10:42:48 crc kubenswrapper[4728]: I0227 10:42:48.238266 4728 generic.go:334] "Generic (PLEG): container finished" podID="a6dd47e0-f719-4fb7-99ea-33468a6bdc97" containerID="30d269dbff1ee83ec510ac8c400ba2ffe3bfc551cc1496452b4afe0d45a2a093" exitCode=0 Feb 27 10:42:48 crc kubenswrapper[4728]: I0227 10:42:48.238307 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc" event={"ID":"a6dd47e0-f719-4fb7-99ea-33468a6bdc97","Type":"ContainerDied","Data":"30d269dbff1ee83ec510ac8c400ba2ffe3bfc551cc1496452b4afe0d45a2a093"} Feb 27 10:42:48 crc kubenswrapper[4728]: I0227 10:42:48.238331 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc" event={"ID":"a6dd47e0-f719-4fb7-99ea-33468a6bdc97","Type":"ContainerStarted","Data":"d9cc1ece9206862409a8454813909adcd54ad3b101dc2a924256836ef32aa0ab"} Feb 27 10:42:50 crc kubenswrapper[4728]: I0227 10:42:50.259628 4728 generic.go:334] "Generic (PLEG): container finished" podID="a6dd47e0-f719-4fb7-99ea-33468a6bdc97" containerID="db5ba0fe7c7306ea8c8668491978fad39a494b56d9ef24721d5dd698b8d216e6" exitCode=0 Feb 27 10:42:50 crc kubenswrapper[4728]: I0227 10:42:50.259807 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc" 
event={"ID":"a6dd47e0-f719-4fb7-99ea-33468a6bdc97","Type":"ContainerDied","Data":"db5ba0fe7c7306ea8c8668491978fad39a494b56d9ef24721d5dd698b8d216e6"} Feb 27 10:42:51 crc kubenswrapper[4728]: I0227 10:42:51.270714 4728 generic.go:334] "Generic (PLEG): container finished" podID="a6dd47e0-f719-4fb7-99ea-33468a6bdc97" containerID="9ba264e5433da0a3f7a1b70861451227eb0f456abf05e9a0ccb9cd44293f6ef4" exitCode=0 Feb 27 10:42:51 crc kubenswrapper[4728]: I0227 10:42:51.270763 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc" event={"ID":"a6dd47e0-f719-4fb7-99ea-33468a6bdc97","Type":"ContainerDied","Data":"9ba264e5433da0a3f7a1b70861451227eb0f456abf05e9a0ccb9cd44293f6ef4"} Feb 27 10:42:52 crc kubenswrapper[4728]: I0227 10:42:52.577639 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc" Feb 27 10:42:52 crc kubenswrapper[4728]: I0227 10:42:52.722854 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6dd47e0-f719-4fb7-99ea-33468a6bdc97-bundle\") pod \"a6dd47e0-f719-4fb7-99ea-33468a6bdc97\" (UID: \"a6dd47e0-f719-4fb7-99ea-33468a6bdc97\") " Feb 27 10:42:52 crc kubenswrapper[4728]: I0227 10:42:52.722979 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6dd47e0-f719-4fb7-99ea-33468a6bdc97-util\") pod \"a6dd47e0-f719-4fb7-99ea-33468a6bdc97\" (UID: \"a6dd47e0-f719-4fb7-99ea-33468a6bdc97\") " Feb 27 10:42:52 crc kubenswrapper[4728]: I0227 10:42:52.723150 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c85p2\" (UniqueName: \"kubernetes.io/projected/a6dd47e0-f719-4fb7-99ea-33468a6bdc97-kube-api-access-c85p2\") pod \"a6dd47e0-f719-4fb7-99ea-33468a6bdc97\" (UID: 
\"a6dd47e0-f719-4fb7-99ea-33468a6bdc97\") " Feb 27 10:42:52 crc kubenswrapper[4728]: I0227 10:42:52.725630 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6dd47e0-f719-4fb7-99ea-33468a6bdc97-bundle" (OuterVolumeSpecName: "bundle") pod "a6dd47e0-f719-4fb7-99ea-33468a6bdc97" (UID: "a6dd47e0-f719-4fb7-99ea-33468a6bdc97"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:42:52 crc kubenswrapper[4728]: I0227 10:42:52.730951 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6dd47e0-f719-4fb7-99ea-33468a6bdc97-kube-api-access-c85p2" (OuterVolumeSpecName: "kube-api-access-c85p2") pod "a6dd47e0-f719-4fb7-99ea-33468a6bdc97" (UID: "a6dd47e0-f719-4fb7-99ea-33468a6bdc97"). InnerVolumeSpecName "kube-api-access-c85p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:42:52 crc kubenswrapper[4728]: I0227 10:42:52.737020 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6dd47e0-f719-4fb7-99ea-33468a6bdc97-util" (OuterVolumeSpecName: "util") pod "a6dd47e0-f719-4fb7-99ea-33468a6bdc97" (UID: "a6dd47e0-f719-4fb7-99ea-33468a6bdc97"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:42:52 crc kubenswrapper[4728]: I0227 10:42:52.826565 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c85p2\" (UniqueName: \"kubernetes.io/projected/a6dd47e0-f719-4fb7-99ea-33468a6bdc97-kube-api-access-c85p2\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:52 crc kubenswrapper[4728]: I0227 10:42:52.826984 4728 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6dd47e0-f719-4fb7-99ea-33468a6bdc97-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:52 crc kubenswrapper[4728]: I0227 10:42:52.827247 4728 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6dd47e0-f719-4fb7-99ea-33468a6bdc97-util\") on node \"crc\" DevicePath \"\"" Feb 27 10:42:53 crc kubenswrapper[4728]: I0227 10:42:53.288234 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc" event={"ID":"a6dd47e0-f719-4fb7-99ea-33468a6bdc97","Type":"ContainerDied","Data":"d9cc1ece9206862409a8454813909adcd54ad3b101dc2a924256836ef32aa0ab"} Feb 27 10:42:53 crc kubenswrapper[4728]: I0227 10:42:53.288270 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9cc1ece9206862409a8454813909adcd54ad3b101dc2a924256836ef32aa0ab" Feb 27 10:42:53 crc kubenswrapper[4728]: I0227 10:42:53.288422 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc" Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.796753 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt"] Feb 27 10:43:01 crc kubenswrapper[4728]: E0227 10:43:01.797438 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6dd47e0-f719-4fb7-99ea-33468a6bdc97" containerName="util" Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.797449 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6dd47e0-f719-4fb7-99ea-33468a6bdc97" containerName="util" Feb 27 10:43:01 crc kubenswrapper[4728]: E0227 10:43:01.797459 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6dd47e0-f719-4fb7-99ea-33468a6bdc97" containerName="pull" Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.797466 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6dd47e0-f719-4fb7-99ea-33468a6bdc97" containerName="pull" Feb 27 10:43:01 crc kubenswrapper[4728]: E0227 10:43:01.797477 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6dd47e0-f719-4fb7-99ea-33468a6bdc97" containerName="extract" Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.797483 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6dd47e0-f719-4fb7-99ea-33468a6bdc97" containerName="extract" Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.797623 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6dd47e0-f719-4fb7-99ea-33468a6bdc97" containerName="extract" Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.798098 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt" Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.801490 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.801547 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.801713 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.801754 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-8hl9q" Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.801762 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.829765 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt"] Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.890829 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljgzh\" (UniqueName: \"kubernetes.io/projected/7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1-kube-api-access-ljgzh\") pod \"metallb-operator-controller-manager-7dcfc4d7fd-n4jxt\" (UID: \"7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1\") " pod="metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt" Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.890875 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1-webhook-cert\") pod 
\"metallb-operator-controller-manager-7dcfc4d7fd-n4jxt\" (UID: \"7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1\") " pod="metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt" Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.891057 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1-apiservice-cert\") pod \"metallb-operator-controller-manager-7dcfc4d7fd-n4jxt\" (UID: \"7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1\") " pod="metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt" Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.992128 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljgzh\" (UniqueName: \"kubernetes.io/projected/7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1-kube-api-access-ljgzh\") pod \"metallb-operator-controller-manager-7dcfc4d7fd-n4jxt\" (UID: \"7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1\") " pod="metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt" Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.992177 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1-webhook-cert\") pod \"metallb-operator-controller-manager-7dcfc4d7fd-n4jxt\" (UID: \"7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1\") " pod="metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt" Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.992280 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1-apiservice-cert\") pod \"metallb-operator-controller-manager-7dcfc4d7fd-n4jxt\" (UID: \"7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1\") " pod="metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt" Feb 27 10:43:01 crc 
kubenswrapper[4728]: I0227 10:43:01.997802 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1-apiservice-cert\") pod \"metallb-operator-controller-manager-7dcfc4d7fd-n4jxt\" (UID: \"7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1\") " pod="metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt" Feb 27 10:43:01 crc kubenswrapper[4728]: I0227 10:43:01.998039 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1-webhook-cert\") pod \"metallb-operator-controller-manager-7dcfc4d7fd-n4jxt\" (UID: \"7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1\") " pod="metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt" Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.010628 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljgzh\" (UniqueName: \"kubernetes.io/projected/7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1-kube-api-access-ljgzh\") pod \"metallb-operator-controller-manager-7dcfc4d7fd-n4jxt\" (UID: \"7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1\") " pod="metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt" Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.070367 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5"] Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.079638 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5" Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.084270 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.084469 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-5t9rh" Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.084513 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.098556 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5"] Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.118120 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt" Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.195980 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q47vh\" (UniqueName: \"kubernetes.io/projected/54c10ca0-417a-46fc-9d21-ffc4e939073d-kube-api-access-q47vh\") pod \"metallb-operator-webhook-server-676f4c84b9-j2lx5\" (UID: \"54c10ca0-417a-46fc-9d21-ffc4e939073d\") " pod="metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5" Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.196384 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54c10ca0-417a-46fc-9d21-ffc4e939073d-apiservice-cert\") pod \"metallb-operator-webhook-server-676f4c84b9-j2lx5\" (UID: \"54c10ca0-417a-46fc-9d21-ffc4e939073d\") " pod="metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5" Feb 27 10:43:02 crc kubenswrapper[4728]: 
I0227 10:43:02.196463 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54c10ca0-417a-46fc-9d21-ffc4e939073d-webhook-cert\") pod \"metallb-operator-webhook-server-676f4c84b9-j2lx5\" (UID: \"54c10ca0-417a-46fc-9d21-ffc4e939073d\") " pod="metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5" Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.297813 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54c10ca0-417a-46fc-9d21-ffc4e939073d-webhook-cert\") pod \"metallb-operator-webhook-server-676f4c84b9-j2lx5\" (UID: \"54c10ca0-417a-46fc-9d21-ffc4e939073d\") " pod="metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5" Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.297972 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q47vh\" (UniqueName: \"kubernetes.io/projected/54c10ca0-417a-46fc-9d21-ffc4e939073d-kube-api-access-q47vh\") pod \"metallb-operator-webhook-server-676f4c84b9-j2lx5\" (UID: \"54c10ca0-417a-46fc-9d21-ffc4e939073d\") " pod="metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5" Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.298016 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54c10ca0-417a-46fc-9d21-ffc4e939073d-apiservice-cert\") pod \"metallb-operator-webhook-server-676f4c84b9-j2lx5\" (UID: \"54c10ca0-417a-46fc-9d21-ffc4e939073d\") " pod="metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5" Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.315768 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54c10ca0-417a-46fc-9d21-ffc4e939073d-webhook-cert\") pod 
\"metallb-operator-webhook-server-676f4c84b9-j2lx5\" (UID: \"54c10ca0-417a-46fc-9d21-ffc4e939073d\") " pod="metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5" Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.315794 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54c10ca0-417a-46fc-9d21-ffc4e939073d-apiservice-cert\") pod \"metallb-operator-webhook-server-676f4c84b9-j2lx5\" (UID: \"54c10ca0-417a-46fc-9d21-ffc4e939073d\") " pod="metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5" Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.320579 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q47vh\" (UniqueName: \"kubernetes.io/projected/54c10ca0-417a-46fc-9d21-ffc4e939073d-kube-api-access-q47vh\") pod \"metallb-operator-webhook-server-676f4c84b9-j2lx5\" (UID: \"54c10ca0-417a-46fc-9d21-ffc4e939073d\") " pod="metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5" Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.365896 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt"] Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.401855 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5" Feb 27 10:43:02 crc kubenswrapper[4728]: I0227 10:43:02.868035 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5"] Feb 27 10:43:02 crc kubenswrapper[4728]: W0227 10:43:02.874070 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54c10ca0_417a_46fc_9d21_ffc4e939073d.slice/crio-44ed6d6fa9552649ab3498ec51901ffbd284cfbb4af42f60a17af175e10b6ded WatchSource:0}: Error finding container 44ed6d6fa9552649ab3498ec51901ffbd284cfbb4af42f60a17af175e10b6ded: Status 404 returned error can't find the container with id 44ed6d6fa9552649ab3498ec51901ffbd284cfbb4af42f60a17af175e10b6ded Feb 27 10:43:03 crc kubenswrapper[4728]: I0227 10:43:03.385238 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt" event={"ID":"7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1","Type":"ContainerStarted","Data":"91785579a752fb7ff709c084bef162a2dd0d2fa4fc491e47581a8bd05c002b4b"} Feb 27 10:43:03 crc kubenswrapper[4728]: I0227 10:43:03.386630 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5" event={"ID":"54c10ca0-417a-46fc-9d21-ffc4e939073d","Type":"ContainerStarted","Data":"44ed6d6fa9552649ab3498ec51901ffbd284cfbb4af42f60a17af175e10b6ded"} Feb 27 10:43:05 crc kubenswrapper[4728]: I0227 10:43:05.422193 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt" event={"ID":"7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1","Type":"ContainerStarted","Data":"39f11bb51d1048a5c406a34fc7a0c738088eacc872e4f47aa046047784001d29"} Feb 27 10:43:05 crc kubenswrapper[4728]: I0227 10:43:05.422753 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt" Feb 27 10:43:05 crc kubenswrapper[4728]: I0227 10:43:05.449890 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt" podStartSLOduration=1.644799596 podStartE2EDuration="4.449872038s" podCreationTimestamp="2026-02-27 10:43:01 +0000 UTC" firstStartedPulling="2026-02-27 10:43:02.371786028 +0000 UTC m=+1002.334152134" lastFinishedPulling="2026-02-27 10:43:05.17685843 +0000 UTC m=+1005.139224576" observedRunningTime="2026-02-27 10:43:05.44702382 +0000 UTC m=+1005.409389946" watchObservedRunningTime="2026-02-27 10:43:05.449872038 +0000 UTC m=+1005.412238144" Feb 27 10:43:08 crc kubenswrapper[4728]: I0227 10:43:08.450181 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5" event={"ID":"54c10ca0-417a-46fc-9d21-ffc4e939073d","Type":"ContainerStarted","Data":"332ad2dd19940e8e87024a2eb3b9363a1a800d188b55d8a9dd0cd5fb36c6c5a1"} Feb 27 10:43:08 crc kubenswrapper[4728]: I0227 10:43:08.450933 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5" Feb 27 10:43:08 crc kubenswrapper[4728]: I0227 10:43:08.472466 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5" podStartSLOduration=1.8885206700000001 podStartE2EDuration="6.472448993s" podCreationTimestamp="2026-02-27 10:43:02 +0000 UTC" firstStartedPulling="2026-02-27 10:43:02.875745358 +0000 UTC m=+1002.838111464" lastFinishedPulling="2026-02-27 10:43:07.459673681 +0000 UTC m=+1007.422039787" observedRunningTime="2026-02-27 10:43:08.469089891 +0000 UTC m=+1008.431455997" watchObservedRunningTime="2026-02-27 10:43:08.472448993 +0000 UTC m=+1008.434815099" Feb 27 10:43:22 crc kubenswrapper[4728]: I0227 10:43:22.410940 4728 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-676f4c84b9-j2lx5" Feb 27 10:43:42 crc kubenswrapper[4728]: I0227 10:43:42.122442 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7dcfc4d7fd-n4jxt" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.057633 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-4cwsw"] Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.058865 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4cwsw" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.063873 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5g79p" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.064445 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.064782 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-2mg6c"] Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.068949 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.072604 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.078196 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.080027 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-4cwsw"] Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.160687 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59e183d9-0890-454a-8d87-779c957c6b18-metrics-certs\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.160765 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/59e183d9-0890-454a-8d87-779c957c6b18-frr-sockets\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.160850 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/59e183d9-0890-454a-8d87-779c957c6b18-frr-conf\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.160891 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/59e183d9-0890-454a-8d87-779c957c6b18-metrics\") pod \"frr-k8s-2mg6c\" (UID: 
\"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.160922 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/59e183d9-0890-454a-8d87-779c957c6b18-frr-startup\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.160966 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/59e183d9-0890-454a-8d87-779c957c6b18-reloader\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.161005 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8rs6\" (UniqueName: \"kubernetes.io/projected/59e183d9-0890-454a-8d87-779c957c6b18-kube-api-access-g8rs6\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.161054 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d70cb53-6bc9-4605-bd13-8a60aa7fff09-cert\") pod \"frr-k8s-webhook-server-7f989f654f-4cwsw\" (UID: \"5d70cb53-6bc9-4605-bd13-8a60aa7fff09\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4cwsw" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.161076 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgfms\" (UniqueName: \"kubernetes.io/projected/5d70cb53-6bc9-4605-bd13-8a60aa7fff09-kube-api-access-xgfms\") pod \"frr-k8s-webhook-server-7f989f654f-4cwsw\" (UID: 
\"5d70cb53-6bc9-4605-bd13-8a60aa7fff09\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4cwsw" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.181078 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-cld6r"] Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.182470 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-cld6r" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.190994 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.191094 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-bpbdd" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.197222 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.207536 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-6jf48"] Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.208223 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.209373 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-6jf48" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.217914 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.224840 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-6jf48"] Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.262260 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/59e183d9-0890-454a-8d87-779c957c6b18-metrics\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.262304 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9mmf\" (UniqueName: \"kubernetes.io/projected/e32ec3a5-e73e-4a97-806c-3108464a20ef-kube-api-access-c9mmf\") pod \"controller-86ddb6bd46-6jf48\" (UID: \"e32ec3a5-e73e-4a97-806c-3108464a20ef\") " pod="metallb-system/controller-86ddb6bd46-6jf48" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.262329 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/59e183d9-0890-454a-8d87-779c957c6b18-frr-startup\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.262359 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-memberlist\") pod \"speaker-cld6r\" (UID: \"bfdda6c7-f942-4d1b-b0f6-7f169505a8cc\") " pod="metallb-system/speaker-cld6r" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.262375 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-metallb-excludel2\") pod \"speaker-cld6r\" (UID: \"bfdda6c7-f942-4d1b-b0f6-7f169505a8cc\") " pod="metallb-system/speaker-cld6r" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.262401 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/59e183d9-0890-454a-8d87-779c957c6b18-reloader\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.262425 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8rs6\" (UniqueName: \"kubernetes.io/projected/59e183d9-0890-454a-8d87-779c957c6b18-kube-api-access-g8rs6\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.262456 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e32ec3a5-e73e-4a97-806c-3108464a20ef-metrics-certs\") pod \"controller-86ddb6bd46-6jf48\" (UID: \"e32ec3a5-e73e-4a97-806c-3108464a20ef\") " pod="metallb-system/controller-86ddb6bd46-6jf48" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.262478 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d70cb53-6bc9-4605-bd13-8a60aa7fff09-cert\") pod \"frr-k8s-webhook-server-7f989f654f-4cwsw\" (UID: \"5d70cb53-6bc9-4605-bd13-8a60aa7fff09\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4cwsw" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.262574 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xgfms\" (UniqueName: \"kubernetes.io/projected/5d70cb53-6bc9-4605-bd13-8a60aa7fff09-kube-api-access-xgfms\") pod \"frr-k8s-webhook-server-7f989f654f-4cwsw\" (UID: \"5d70cb53-6bc9-4605-bd13-8a60aa7fff09\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4cwsw" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.262609 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e32ec3a5-e73e-4a97-806c-3108464a20ef-cert\") pod \"controller-86ddb6bd46-6jf48\" (UID: \"e32ec3a5-e73e-4a97-806c-3108464a20ef\") " pod="metallb-system/controller-86ddb6bd46-6jf48" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.262630 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc5h9\" (UniqueName: \"kubernetes.io/projected/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-kube-api-access-gc5h9\") pod \"speaker-cld6r\" (UID: \"bfdda6c7-f942-4d1b-b0f6-7f169505a8cc\") " pod="metallb-system/speaker-cld6r" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.262653 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59e183d9-0890-454a-8d87-779c957c6b18-metrics-certs\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.262673 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/59e183d9-0890-454a-8d87-779c957c6b18-frr-sockets\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.262705 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/59e183d9-0890-454a-8d87-779c957c6b18-frr-conf\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.262720 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-metrics-certs\") pod \"speaker-cld6r\" (UID: \"bfdda6c7-f942-4d1b-b0f6-7f169505a8cc\") " pod="metallb-system/speaker-cld6r" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.264335 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/59e183d9-0890-454a-8d87-779c957c6b18-metrics\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.265038 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/59e183d9-0890-454a-8d87-779c957c6b18-frr-startup\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.265216 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/59e183d9-0890-454a-8d87-779c957c6b18-reloader\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.270766 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/59e183d9-0890-454a-8d87-779c957c6b18-frr-sockets\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 
10:43:43.272917 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/59e183d9-0890-454a-8d87-779c957c6b18-frr-conf\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.278109 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d70cb53-6bc9-4605-bd13-8a60aa7fff09-cert\") pod \"frr-k8s-webhook-server-7f989f654f-4cwsw\" (UID: \"5d70cb53-6bc9-4605-bd13-8a60aa7fff09\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4cwsw" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.278950 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59e183d9-0890-454a-8d87-779c957c6b18-metrics-certs\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.293230 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8rs6\" (UniqueName: \"kubernetes.io/projected/59e183d9-0890-454a-8d87-779c957c6b18-kube-api-access-g8rs6\") pod \"frr-k8s-2mg6c\" (UID: \"59e183d9-0890-454a-8d87-779c957c6b18\") " pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.332164 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgfms\" (UniqueName: \"kubernetes.io/projected/5d70cb53-6bc9-4605-bd13-8a60aa7fff09-kube-api-access-xgfms\") pod \"frr-k8s-webhook-server-7f989f654f-4cwsw\" (UID: \"5d70cb53-6bc9-4605-bd13-8a60aa7fff09\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4cwsw" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.366950 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-metrics-certs\") pod \"speaker-cld6r\" (UID: \"bfdda6c7-f942-4d1b-b0f6-7f169505a8cc\") " pod="metallb-system/speaker-cld6r" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.366999 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9mmf\" (UniqueName: \"kubernetes.io/projected/e32ec3a5-e73e-4a97-806c-3108464a20ef-kube-api-access-c9mmf\") pod \"controller-86ddb6bd46-6jf48\" (UID: \"e32ec3a5-e73e-4a97-806c-3108464a20ef\") " pod="metallb-system/controller-86ddb6bd46-6jf48" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.367028 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-memberlist\") pod \"speaker-cld6r\" (UID: \"bfdda6c7-f942-4d1b-b0f6-7f169505a8cc\") " pod="metallb-system/speaker-cld6r" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.367044 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-metallb-excludel2\") pod \"speaker-cld6r\" (UID: \"bfdda6c7-f942-4d1b-b0f6-7f169505a8cc\") " pod="metallb-system/speaker-cld6r" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.367085 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e32ec3a5-e73e-4a97-806c-3108464a20ef-metrics-certs\") pod \"controller-86ddb6bd46-6jf48\" (UID: \"e32ec3a5-e73e-4a97-806c-3108464a20ef\") " pod="metallb-system/controller-86ddb6bd46-6jf48" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.367129 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e32ec3a5-e73e-4a97-806c-3108464a20ef-cert\") pod \"controller-86ddb6bd46-6jf48\" (UID: 
\"e32ec3a5-e73e-4a97-806c-3108464a20ef\") " pod="metallb-system/controller-86ddb6bd46-6jf48" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.367154 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc5h9\" (UniqueName: \"kubernetes.io/projected/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-kube-api-access-gc5h9\") pod \"speaker-cld6r\" (UID: \"bfdda6c7-f942-4d1b-b0f6-7f169505a8cc\") " pod="metallb-system/speaker-cld6r" Feb 27 10:43:43 crc kubenswrapper[4728]: E0227 10:43:43.367533 4728 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 27 10:43:43 crc kubenswrapper[4728]: E0227 10:43:43.367578 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-metrics-certs podName:bfdda6c7-f942-4d1b-b0f6-7f169505a8cc nodeName:}" failed. No retries permitted until 2026-02-27 10:43:43.867563434 +0000 UTC m=+1043.829929540 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-metrics-certs") pod "speaker-cld6r" (UID: "bfdda6c7-f942-4d1b-b0f6-7f169505a8cc") : secret "speaker-certs-secret" not found Feb 27 10:43:43 crc kubenswrapper[4728]: E0227 10:43:43.367724 4728 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 27 10:43:43 crc kubenswrapper[4728]: E0227 10:43:43.367750 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e32ec3a5-e73e-4a97-806c-3108464a20ef-metrics-certs podName:e32ec3a5-e73e-4a97-806c-3108464a20ef nodeName:}" failed. No retries permitted until 2026-02-27 10:43:43.867743389 +0000 UTC m=+1043.830109495 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e32ec3a5-e73e-4a97-806c-3108464a20ef-metrics-certs") pod "controller-86ddb6bd46-6jf48" (UID: "e32ec3a5-e73e-4a97-806c-3108464a20ef") : secret "controller-certs-secret" not found Feb 27 10:43:43 crc kubenswrapper[4728]: E0227 10:43:43.368024 4728 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 27 10:43:43 crc kubenswrapper[4728]: E0227 10:43:43.368056 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-memberlist podName:bfdda6c7-f942-4d1b-b0f6-7f169505a8cc nodeName:}" failed. No retries permitted until 2026-02-27 10:43:43.868047347 +0000 UTC m=+1043.830413453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-memberlist") pod "speaker-cld6r" (UID: "bfdda6c7-f942-4d1b-b0f6-7f169505a8cc") : secret "metallb-memberlist" not found Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.368281 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-metallb-excludel2\") pod \"speaker-cld6r\" (UID: \"bfdda6c7-f942-4d1b-b0f6-7f169505a8cc\") " pod="metallb-system/speaker-cld6r" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.371682 4728 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.385877 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4cwsw" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.396998 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.399006 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e32ec3a5-e73e-4a97-806c-3108464a20ef-cert\") pod \"controller-86ddb6bd46-6jf48\" (UID: \"e32ec3a5-e73e-4a97-806c-3108464a20ef\") " pod="metallb-system/controller-86ddb6bd46-6jf48" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.407083 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9mmf\" (UniqueName: \"kubernetes.io/projected/e32ec3a5-e73e-4a97-806c-3108464a20ef-kube-api-access-c9mmf\") pod \"controller-86ddb6bd46-6jf48\" (UID: \"e32ec3a5-e73e-4a97-806c-3108464a20ef\") " pod="metallb-system/controller-86ddb6bd46-6jf48" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.416051 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc5h9\" (UniqueName: \"kubernetes.io/projected/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-kube-api-access-gc5h9\") pod \"speaker-cld6r\" (UID: \"bfdda6c7-f942-4d1b-b0f6-7f169505a8cc\") " pod="metallb-system/speaker-cld6r" Feb 27 10:43:43 crc kubenswrapper[4728]: W0227 10:43:43.867160 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d70cb53_6bc9_4605_bd13_8a60aa7fff09.slice/crio-17f8d6bd5e71ac464192aad137bbdc8362abf66cfaafbd66599fd3cbd2ac235d WatchSource:0}: Error finding container 17f8d6bd5e71ac464192aad137bbdc8362abf66cfaafbd66599fd3cbd2ac235d: Status 404 returned error can't find the container with id 17f8d6bd5e71ac464192aad137bbdc8362abf66cfaafbd66599fd3cbd2ac235d Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.874049 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-4cwsw"] Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.878864 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-metrics-certs\") pod \"speaker-cld6r\" (UID: \"bfdda6c7-f942-4d1b-b0f6-7f169505a8cc\") " pod="metallb-system/speaker-cld6r" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.878958 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-memberlist\") pod \"speaker-cld6r\" (UID: \"bfdda6c7-f942-4d1b-b0f6-7f169505a8cc\") " pod="metallb-system/speaker-cld6r" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.879061 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e32ec3a5-e73e-4a97-806c-3108464a20ef-metrics-certs\") pod \"controller-86ddb6bd46-6jf48\" (UID: \"e32ec3a5-e73e-4a97-806c-3108464a20ef\") " pod="metallb-system/controller-86ddb6bd46-6jf48" Feb 27 10:43:43 crc kubenswrapper[4728]: E0227 10:43:43.879277 4728 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 27 10:43:43 crc kubenswrapper[4728]: E0227 10:43:43.879428 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-memberlist podName:bfdda6c7-f942-4d1b-b0f6-7f169505a8cc nodeName:}" failed. No retries permitted until 2026-02-27 10:43:44.87940633 +0000 UTC m=+1044.841772516 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-memberlist") pod "speaker-cld6r" (UID: "bfdda6c7-f942-4d1b-b0f6-7f169505a8cc") : secret "metallb-memberlist" not found Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.885836 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e32ec3a5-e73e-4a97-806c-3108464a20ef-metrics-certs\") pod \"controller-86ddb6bd46-6jf48\" (UID: \"e32ec3a5-e73e-4a97-806c-3108464a20ef\") " pod="metallb-system/controller-86ddb6bd46-6jf48" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.887361 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-metrics-certs\") pod \"speaker-cld6r\" (UID: \"bfdda6c7-f942-4d1b-b0f6-7f169505a8cc\") " pod="metallb-system/speaker-cld6r" Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.919721 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2mg6c" event={"ID":"59e183d9-0890-454a-8d87-779c957c6b18","Type":"ContainerStarted","Data":"da083ba5f477ab57f9f510f2336161207374799a959ab62d576a6d29822b7170"} Feb 27 10:43:43 crc kubenswrapper[4728]: I0227 10:43:43.921411 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4cwsw" event={"ID":"5d70cb53-6bc9-4605-bd13-8a60aa7fff09","Type":"ContainerStarted","Data":"17f8d6bd5e71ac464192aad137bbdc8362abf66cfaafbd66599fd3cbd2ac235d"} Feb 27 10:43:44 crc kubenswrapper[4728]: I0227 10:43:44.182873 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-6jf48" Feb 27 10:43:44 crc kubenswrapper[4728]: I0227 10:43:44.500104 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-6jf48"] Feb 27 10:43:44 crc kubenswrapper[4728]: W0227 10:43:44.501278 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode32ec3a5_e73e_4a97_806c_3108464a20ef.slice/crio-55ecda0c6c663f770cfb16520adfbbd15cd9030cfb0f44c0d61730f83caefd3d WatchSource:0}: Error finding container 55ecda0c6c663f770cfb16520adfbbd15cd9030cfb0f44c0d61730f83caefd3d: Status 404 returned error can't find the container with id 55ecda0c6c663f770cfb16520adfbbd15cd9030cfb0f44c0d61730f83caefd3d Feb 27 10:43:44 crc kubenswrapper[4728]: I0227 10:43:44.929492 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-6jf48" event={"ID":"e32ec3a5-e73e-4a97-806c-3108464a20ef","Type":"ContainerStarted","Data":"f8766ca9f53b3ab5ec321b0a8312d6db7d8dd214955912868be7a06813c182b2"} Feb 27 10:43:44 crc kubenswrapper[4728]: I0227 10:43:44.929816 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-6jf48" event={"ID":"e32ec3a5-e73e-4a97-806c-3108464a20ef","Type":"ContainerStarted","Data":"8380ca31dec9292579374f78001fcd37cfb7ba18d54dec6e80076767043b3dda"} Feb 27 10:43:44 crc kubenswrapper[4728]: I0227 10:43:44.929983 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-6jf48" Feb 27 10:43:44 crc kubenswrapper[4728]: I0227 10:43:44.930035 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-6jf48" event={"ID":"e32ec3a5-e73e-4a97-806c-3108464a20ef","Type":"ContainerStarted","Data":"55ecda0c6c663f770cfb16520adfbbd15cd9030cfb0f44c0d61730f83caefd3d"} Feb 27 10:43:44 crc kubenswrapper[4728]: I0227 10:43:44.933157 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-memberlist\") pod \"speaker-cld6r\" (UID: \"bfdda6c7-f942-4d1b-b0f6-7f169505a8cc\") " pod="metallb-system/speaker-cld6r" Feb 27 10:43:44 crc kubenswrapper[4728]: I0227 10:43:44.940135 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bfdda6c7-f942-4d1b-b0f6-7f169505a8cc-memberlist\") pod \"speaker-cld6r\" (UID: \"bfdda6c7-f942-4d1b-b0f6-7f169505a8cc\") " pod="metallb-system/speaker-cld6r" Feb 27 10:43:44 crc kubenswrapper[4728]: I0227 10:43:44.963357 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-6jf48" podStartSLOduration=1.963338759 podStartE2EDuration="1.963338759s" podCreationTimestamp="2026-02-27 10:43:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:43:44.958215869 +0000 UTC m=+1044.920581975" watchObservedRunningTime="2026-02-27 10:43:44.963338759 +0000 UTC m=+1044.925704865" Feb 27 10:43:45 crc kubenswrapper[4728]: I0227 10:43:45.001759 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-cld6r" Feb 27 10:43:45 crc kubenswrapper[4728]: I0227 10:43:45.938890 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cld6r" event={"ID":"bfdda6c7-f942-4d1b-b0f6-7f169505a8cc","Type":"ContainerStarted","Data":"e93767c3a76b58f0c9b284ddd257c98662b7aca5d78f89c9fa3473c21820a343"} Feb 27 10:43:45 crc kubenswrapper[4728]: I0227 10:43:45.939224 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cld6r" event={"ID":"bfdda6c7-f942-4d1b-b0f6-7f169505a8cc","Type":"ContainerStarted","Data":"0289521020fa5aa3ae2b78907b96ec41945e1c7e559f4334e553f15f4921e89c"} Feb 27 10:43:45 crc kubenswrapper[4728]: I0227 10:43:45.939235 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cld6r" event={"ID":"bfdda6c7-f942-4d1b-b0f6-7f169505a8cc","Type":"ContainerStarted","Data":"1d0d18ba521b77b78166254251bd61834c7916b8f2f71246057adf81c690d8f3"} Feb 27 10:43:45 crc kubenswrapper[4728]: I0227 10:43:45.939654 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-cld6r" Feb 27 10:43:50 crc kubenswrapper[4728]: I0227 10:43:50.747943 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-cld6r" podStartSLOduration=7.747924623 podStartE2EDuration="7.747924623s" podCreationTimestamp="2026-02-27 10:43:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:43:45.960658177 +0000 UTC m=+1045.923024283" watchObservedRunningTime="2026-02-27 10:43:50.747924623 +0000 UTC m=+1050.710290729" Feb 27 10:43:51 crc kubenswrapper[4728]: I0227 10:43:51.988738 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4cwsw" 
event={"ID":"5d70cb53-6bc9-4605-bd13-8a60aa7fff09","Type":"ContainerStarted","Data":"508b40ec66f26207b5eebe0385294fc4d4abaf1104a63bb3ac78d702fff7bdba"} Feb 27 10:43:51 crc kubenswrapper[4728]: I0227 10:43:51.990155 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4cwsw" Feb 27 10:43:51 crc kubenswrapper[4728]: I0227 10:43:51.991997 4728 generic.go:334] "Generic (PLEG): container finished" podID="59e183d9-0890-454a-8d87-779c957c6b18" containerID="1d98bfdfa57045b648cd0a71a245047321b63425a36ac04780c00a5ca4027d1d" exitCode=0 Feb 27 10:43:51 crc kubenswrapper[4728]: I0227 10:43:51.992035 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2mg6c" event={"ID":"59e183d9-0890-454a-8d87-779c957c6b18","Type":"ContainerDied","Data":"1d98bfdfa57045b648cd0a71a245047321b63425a36ac04780c00a5ca4027d1d"} Feb 27 10:43:52 crc kubenswrapper[4728]: I0227 10:43:52.018443 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4cwsw" podStartSLOduration=1.5028225339999999 podStartE2EDuration="9.018422923s" podCreationTimestamp="2026-02-27 10:43:43 +0000 UTC" firstStartedPulling="2026-02-27 10:43:43.871211414 +0000 UTC m=+1043.833577530" lastFinishedPulling="2026-02-27 10:43:51.386811813 +0000 UTC m=+1051.349177919" observedRunningTime="2026-02-27 10:43:52.01540291 +0000 UTC m=+1051.977769046" watchObservedRunningTime="2026-02-27 10:43:52.018422923 +0000 UTC m=+1051.980789029" Feb 27 10:43:53 crc kubenswrapper[4728]: I0227 10:43:53.003395 4728 generic.go:334] "Generic (PLEG): container finished" podID="59e183d9-0890-454a-8d87-779c957c6b18" containerID="eadbff424b9426460e27ad57c1e83c61987eb9f5cf2cb9c795f8b2e9b420696a" exitCode=0 Feb 27 10:43:53 crc kubenswrapper[4728]: I0227 10:43:53.003564 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2mg6c" 
event={"ID":"59e183d9-0890-454a-8d87-779c957c6b18","Type":"ContainerDied","Data":"eadbff424b9426460e27ad57c1e83c61987eb9f5cf2cb9c795f8b2e9b420696a"} Feb 27 10:43:54 crc kubenswrapper[4728]: I0227 10:43:54.014601 4728 generic.go:334] "Generic (PLEG): container finished" podID="59e183d9-0890-454a-8d87-779c957c6b18" containerID="3503cde3e773c2579ccccceecb4400ccb255f06a379fef25bed834d41026a2d2" exitCode=0 Feb 27 10:43:54 crc kubenswrapper[4728]: I0227 10:43:54.014664 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2mg6c" event={"ID":"59e183d9-0890-454a-8d87-779c957c6b18","Type":"ContainerDied","Data":"3503cde3e773c2579ccccceecb4400ccb255f06a379fef25bed834d41026a2d2"} Feb 27 10:43:54 crc kubenswrapper[4728]: I0227 10:43:54.186632 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-6jf48" Feb 27 10:43:55 crc kubenswrapper[4728]: I0227 10:43:55.007459 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-cld6r" Feb 27 10:43:55 crc kubenswrapper[4728]: I0227 10:43:55.037870 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2mg6c" event={"ID":"59e183d9-0890-454a-8d87-779c957c6b18","Type":"ContainerStarted","Data":"41a4d9aa65cf9779ad4a2b3718e4db50b67615cde1f03e074b007a437d089c83"} Feb 27 10:43:55 crc kubenswrapper[4728]: I0227 10:43:55.037919 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2mg6c" event={"ID":"59e183d9-0890-454a-8d87-779c957c6b18","Type":"ContainerStarted","Data":"26a76920d17a37db7d4f43f547e30e2d201f44a31d9d9404ae493c228d8e9d39"} Feb 27 10:43:55 crc kubenswrapper[4728]: I0227 10:43:55.037962 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2mg6c" event={"ID":"59e183d9-0890-454a-8d87-779c957c6b18","Type":"ContainerStarted","Data":"5c5b0cc5db0bc6a0db0816b52f42f0e0839d7c8d983ff49427b88f62a6daa090"} Feb 27 10:43:56 crc 
kubenswrapper[4728]: I0227 10:43:56.055811 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2mg6c" event={"ID":"59e183d9-0890-454a-8d87-779c957c6b18","Type":"ContainerStarted","Data":"43098ef7d65d39915e3bdb0d69c729e07ea8961a1bd626c451def26c23414e8d"} Feb 27 10:43:56 crc kubenswrapper[4728]: I0227 10:43:56.056166 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2mg6c" event={"ID":"59e183d9-0890-454a-8d87-779c957c6b18","Type":"ContainerStarted","Data":"2756cd3d53dcf44b051334dc357f2873cdd3cf0a1577cfd80addb526884d8224"} Feb 27 10:43:56 crc kubenswrapper[4728]: I0227 10:43:56.056184 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2mg6c" event={"ID":"59e183d9-0890-454a-8d87-779c957c6b18","Type":"ContainerStarted","Data":"d515e79291c3dbac4610519a9b8f93a1bd751f20faa96ae945367cf9ce36702c"} Feb 27 10:43:56 crc kubenswrapper[4728]: I0227 10:43:56.056364 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:56 crc kubenswrapper[4728]: I0227 10:43:56.090249 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-2mg6c" podStartSLOduration=5.28721704 podStartE2EDuration="13.090230013s" podCreationTimestamp="2026-02-27 10:43:43 +0000 UTC" firstStartedPulling="2026-02-27 10:43:43.556010556 +0000 UTC m=+1043.518376662" lastFinishedPulling="2026-02-27 10:43:51.359023529 +0000 UTC m=+1051.321389635" observedRunningTime="2026-02-27 10:43:56.083881229 +0000 UTC m=+1056.046247355" watchObservedRunningTime="2026-02-27 10:43:56.090230013 +0000 UTC m=+1056.052596119" Feb 27 10:43:57 crc kubenswrapper[4728]: I0227 10:43:57.934173 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-jrh44"] Feb 27 10:43:57 crc kubenswrapper[4728]: I0227 10:43:57.935762 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jrh44" Feb 27 10:43:57 crc kubenswrapper[4728]: I0227 10:43:57.937778 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 27 10:43:57 crc kubenswrapper[4728]: I0227 10:43:57.937880 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-w7k6t" Feb 27 10:43:57 crc kubenswrapper[4728]: I0227 10:43:57.943146 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 27 10:43:57 crc kubenswrapper[4728]: I0227 10:43:57.947695 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jrh44"] Feb 27 10:43:58 crc kubenswrapper[4728]: I0227 10:43:58.091618 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6bjk\" (UniqueName: \"kubernetes.io/projected/529b1284-0992-4ee8-888e-fa3531547bac-kube-api-access-f6bjk\") pod \"openstack-operator-index-jrh44\" (UID: \"529b1284-0992-4ee8-888e-fa3531547bac\") " pod="openstack-operators/openstack-operator-index-jrh44" Feb 27 10:43:58 crc kubenswrapper[4728]: I0227 10:43:58.192863 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6bjk\" (UniqueName: \"kubernetes.io/projected/529b1284-0992-4ee8-888e-fa3531547bac-kube-api-access-f6bjk\") pod \"openstack-operator-index-jrh44\" (UID: \"529b1284-0992-4ee8-888e-fa3531547bac\") " pod="openstack-operators/openstack-operator-index-jrh44" Feb 27 10:43:58 crc kubenswrapper[4728]: I0227 10:43:58.215391 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6bjk\" (UniqueName: \"kubernetes.io/projected/529b1284-0992-4ee8-888e-fa3531547bac-kube-api-access-f6bjk\") pod \"openstack-operator-index-jrh44\" (UID: 
\"529b1284-0992-4ee8-888e-fa3531547bac\") " pod="openstack-operators/openstack-operator-index-jrh44" Feb 27 10:43:58 crc kubenswrapper[4728]: I0227 10:43:58.258480 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jrh44" Feb 27 10:43:58 crc kubenswrapper[4728]: I0227 10:43:58.398103 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:58 crc kubenswrapper[4728]: I0227 10:43:58.449434 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:43:58 crc kubenswrapper[4728]: W0227 10:43:58.732708 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod529b1284_0992_4ee8_888e_fa3531547bac.slice/crio-70dab85b5cdbb5f77bd4de253af8c62f27feddc3009c8d7a1b1fd73f1c8632fd WatchSource:0}: Error finding container 70dab85b5cdbb5f77bd4de253af8c62f27feddc3009c8d7a1b1fd73f1c8632fd: Status 404 returned error can't find the container with id 70dab85b5cdbb5f77bd4de253af8c62f27feddc3009c8d7a1b1fd73f1c8632fd Feb 27 10:43:58 crc kubenswrapper[4728]: I0227 10:43:58.741816 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jrh44"] Feb 27 10:43:59 crc kubenswrapper[4728]: I0227 10:43:59.077583 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jrh44" event={"ID":"529b1284-0992-4ee8-888e-fa3531547bac","Type":"ContainerStarted","Data":"70dab85b5cdbb5f77bd4de253af8c62f27feddc3009c8d7a1b1fd73f1c8632fd"} Feb 27 10:44:00 crc kubenswrapper[4728]: I0227 10:44:00.137163 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536484-b8wn9"] Feb 27 10:44:00 crc kubenswrapper[4728]: I0227 10:44:00.139374 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536484-b8wn9" Feb 27 10:44:00 crc kubenswrapper[4728]: I0227 10:44:00.146101 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 10:44:00 crc kubenswrapper[4728]: I0227 10:44:00.146158 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:44:00 crc kubenswrapper[4728]: I0227 10:44:00.146585 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:44:00 crc kubenswrapper[4728]: I0227 10:44:00.153987 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536484-b8wn9"] Feb 27 10:44:00 crc kubenswrapper[4728]: I0227 10:44:00.238802 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d892\" (UniqueName: \"kubernetes.io/projected/e2071f71-edef-47c5-a08a-3ed7891d66c0-kube-api-access-7d892\") pod \"auto-csr-approver-29536484-b8wn9\" (UID: \"e2071f71-edef-47c5-a08a-3ed7891d66c0\") " pod="openshift-infra/auto-csr-approver-29536484-b8wn9" Feb 27 10:44:00 crc kubenswrapper[4728]: I0227 10:44:00.340428 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d892\" (UniqueName: \"kubernetes.io/projected/e2071f71-edef-47c5-a08a-3ed7891d66c0-kube-api-access-7d892\") pod \"auto-csr-approver-29536484-b8wn9\" (UID: \"e2071f71-edef-47c5-a08a-3ed7891d66c0\") " pod="openshift-infra/auto-csr-approver-29536484-b8wn9" Feb 27 10:44:00 crc kubenswrapper[4728]: I0227 10:44:00.359250 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d892\" (UniqueName: \"kubernetes.io/projected/e2071f71-edef-47c5-a08a-3ed7891d66c0-kube-api-access-7d892\") pod \"auto-csr-approver-29536484-b8wn9\" (UID: \"e2071f71-edef-47c5-a08a-3ed7891d66c0\") " 
pod="openshift-infra/auto-csr-approver-29536484-b8wn9" Feb 27 10:44:00 crc kubenswrapper[4728]: I0227 10:44:00.479456 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536484-b8wn9" Feb 27 10:44:01 crc kubenswrapper[4728]: I0227 10:44:01.109943 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-jrh44"] Feb 27 10:44:01 crc kubenswrapper[4728]: I0227 10:44:01.724011 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-z7cr9"] Feb 27 10:44:01 crc kubenswrapper[4728]: I0227 10:44:01.737525 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-z7cr9" Feb 27 10:44:01 crc kubenswrapper[4728]: I0227 10:44:01.738395 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-z7cr9"] Feb 27 10:44:01 crc kubenswrapper[4728]: I0227 10:44:01.864331 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj4kn\" (UniqueName: \"kubernetes.io/projected/da26da61-4baa-47ad-a1eb-57d3fd410f22-kube-api-access-zj4kn\") pod \"openstack-operator-index-z7cr9\" (UID: \"da26da61-4baa-47ad-a1eb-57d3fd410f22\") " pod="openstack-operators/openstack-operator-index-z7cr9" Feb 27 10:44:01 crc kubenswrapper[4728]: I0227 10:44:01.967260 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj4kn\" (UniqueName: \"kubernetes.io/projected/da26da61-4baa-47ad-a1eb-57d3fd410f22-kube-api-access-zj4kn\") pod \"openstack-operator-index-z7cr9\" (UID: \"da26da61-4baa-47ad-a1eb-57d3fd410f22\") " pod="openstack-operators/openstack-operator-index-z7cr9" Feb 27 10:44:02 crc kubenswrapper[4728]: I0227 10:44:02.003042 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj4kn\" (UniqueName: 
\"kubernetes.io/projected/da26da61-4baa-47ad-a1eb-57d3fd410f22-kube-api-access-zj4kn\") pod \"openstack-operator-index-z7cr9\" (UID: \"da26da61-4baa-47ad-a1eb-57d3fd410f22\") " pod="openstack-operators/openstack-operator-index-z7cr9" Feb 27 10:44:02 crc kubenswrapper[4728]: I0227 10:44:02.003096 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536484-b8wn9"] Feb 27 10:44:02 crc kubenswrapper[4728]: W0227 10:44:02.006213 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2071f71_edef_47c5_a08a_3ed7891d66c0.slice/crio-3833532769b3ea7d07adebdabc5ae937b068ee72709a3e28593be64a00c4f096 WatchSource:0}: Error finding container 3833532769b3ea7d07adebdabc5ae937b068ee72709a3e28593be64a00c4f096: Status 404 returned error can't find the container with id 3833532769b3ea7d07adebdabc5ae937b068ee72709a3e28593be64a00c4f096 Feb 27 10:44:02 crc kubenswrapper[4728]: I0227 10:44:02.071458 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-z7cr9" Feb 27 10:44:02 crc kubenswrapper[4728]: I0227 10:44:02.122822 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jrh44" event={"ID":"529b1284-0992-4ee8-888e-fa3531547bac","Type":"ContainerStarted","Data":"805766d696e0a8d662893e222fe1b82c3bea88387b4deba9fd6382649ccd32f9"} Feb 27 10:44:02 crc kubenswrapper[4728]: I0227 10:44:02.123050 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-jrh44" podUID="529b1284-0992-4ee8-888e-fa3531547bac" containerName="registry-server" containerID="cri-o://805766d696e0a8d662893e222fe1b82c3bea88387b4deba9fd6382649ccd32f9" gracePeriod=2 Feb 27 10:44:02 crc kubenswrapper[4728]: I0227 10:44:02.126448 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536484-b8wn9" event={"ID":"e2071f71-edef-47c5-a08a-3ed7891d66c0","Type":"ContainerStarted","Data":"3833532769b3ea7d07adebdabc5ae937b068ee72709a3e28593be64a00c4f096"} Feb 27 10:44:02 crc kubenswrapper[4728]: I0227 10:44:02.157635 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-jrh44" podStartSLOduration=2.370973488 podStartE2EDuration="5.157608704s" podCreationTimestamp="2026-02-27 10:43:57 +0000 UTC" firstStartedPulling="2026-02-27 10:43:58.738859724 +0000 UTC m=+1058.701225840" lastFinishedPulling="2026-02-27 10:44:01.52549495 +0000 UTC m=+1061.487861056" observedRunningTime="2026-02-27 10:44:02.143915377 +0000 UTC m=+1062.106281553" watchObservedRunningTime="2026-02-27 10:44:02.157608704 +0000 UTC m=+1062.119974840" Feb 27 10:44:02 crc kubenswrapper[4728]: I0227 10:44:02.381790 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-z7cr9"] Feb 27 10:44:02 crc kubenswrapper[4728]: W0227 10:44:02.386830 4728 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda26da61_4baa_47ad_a1eb_57d3fd410f22.slice/crio-bf44d12731151292da124eff5baf05eb47dda7f1e73630bdaa77ec6a60d2308c WatchSource:0}: Error finding container bf44d12731151292da124eff5baf05eb47dda7f1e73630bdaa77ec6a60d2308c: Status 404 returned error can't find the container with id bf44d12731151292da124eff5baf05eb47dda7f1e73630bdaa77ec6a60d2308c Feb 27 10:44:02 crc kubenswrapper[4728]: I0227 10:44:02.651192 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jrh44" Feb 27 10:44:02 crc kubenswrapper[4728]: I0227 10:44:02.780578 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6bjk\" (UniqueName: \"kubernetes.io/projected/529b1284-0992-4ee8-888e-fa3531547bac-kube-api-access-f6bjk\") pod \"529b1284-0992-4ee8-888e-fa3531547bac\" (UID: \"529b1284-0992-4ee8-888e-fa3531547bac\") " Feb 27 10:44:02 crc kubenswrapper[4728]: I0227 10:44:02.786252 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/529b1284-0992-4ee8-888e-fa3531547bac-kube-api-access-f6bjk" (OuterVolumeSpecName: "kube-api-access-f6bjk") pod "529b1284-0992-4ee8-888e-fa3531547bac" (UID: "529b1284-0992-4ee8-888e-fa3531547bac"). InnerVolumeSpecName "kube-api-access-f6bjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:44:02 crc kubenswrapper[4728]: I0227 10:44:02.883948 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6bjk\" (UniqueName: \"kubernetes.io/projected/529b1284-0992-4ee8-888e-fa3531547bac-kube-api-access-f6bjk\") on node \"crc\" DevicePath \"\"" Feb 27 10:44:03 crc kubenswrapper[4728]: I0227 10:44:03.137830 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z7cr9" event={"ID":"da26da61-4baa-47ad-a1eb-57d3fd410f22","Type":"ContainerStarted","Data":"e5aadcfa60ca8d05faf3d14ebda3af19f62a75dec19669719e84a546a1d58476"} Feb 27 10:44:03 crc kubenswrapper[4728]: I0227 10:44:03.138097 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z7cr9" event={"ID":"da26da61-4baa-47ad-a1eb-57d3fd410f22","Type":"ContainerStarted","Data":"bf44d12731151292da124eff5baf05eb47dda7f1e73630bdaa77ec6a60d2308c"} Feb 27 10:44:03 crc kubenswrapper[4728]: I0227 10:44:03.145886 4728 generic.go:334] "Generic (PLEG): container finished" podID="529b1284-0992-4ee8-888e-fa3531547bac" containerID="805766d696e0a8d662893e222fe1b82c3bea88387b4deba9fd6382649ccd32f9" exitCode=0 Feb 27 10:44:03 crc kubenswrapper[4728]: I0227 10:44:03.145936 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jrh44" Feb 27 10:44:03 crc kubenswrapper[4728]: I0227 10:44:03.146031 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jrh44" event={"ID":"529b1284-0992-4ee8-888e-fa3531547bac","Type":"ContainerDied","Data":"805766d696e0a8d662893e222fe1b82c3bea88387b4deba9fd6382649ccd32f9"} Feb 27 10:44:03 crc kubenswrapper[4728]: I0227 10:44:03.146113 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jrh44" event={"ID":"529b1284-0992-4ee8-888e-fa3531547bac","Type":"ContainerDied","Data":"70dab85b5cdbb5f77bd4de253af8c62f27feddc3009c8d7a1b1fd73f1c8632fd"} Feb 27 10:44:03 crc kubenswrapper[4728]: I0227 10:44:03.146162 4728 scope.go:117] "RemoveContainer" containerID="805766d696e0a8d662893e222fe1b82c3bea88387b4deba9fd6382649ccd32f9" Feb 27 10:44:03 crc kubenswrapper[4728]: I0227 10:44:03.150072 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536484-b8wn9" event={"ID":"e2071f71-edef-47c5-a08a-3ed7891d66c0","Type":"ContainerStarted","Data":"a3e438cea8d8eca61ab4f50315598a30164960ef35bd6515c9bae8df76d98d83"} Feb 27 10:44:03 crc kubenswrapper[4728]: I0227 10:44:03.174922 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-z7cr9" podStartSLOduration=2.133408418 podStartE2EDuration="2.17489375s" podCreationTimestamp="2026-02-27 10:44:01 +0000 UTC" firstStartedPulling="2026-02-27 10:44:02.391050133 +0000 UTC m=+1062.353416239" lastFinishedPulling="2026-02-27 10:44:02.432535435 +0000 UTC m=+1062.394901571" observedRunningTime="2026-02-27 10:44:03.156107034 +0000 UTC m=+1063.118473140" watchObservedRunningTime="2026-02-27 10:44:03.17489375 +0000 UTC m=+1063.137259896" Feb 27 10:44:03 crc kubenswrapper[4728]: I0227 10:44:03.182757 4728 scope.go:117] "RemoveContainer" 
containerID="805766d696e0a8d662893e222fe1b82c3bea88387b4deba9fd6382649ccd32f9" Feb 27 10:44:03 crc kubenswrapper[4728]: E0227 10:44:03.183908 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"805766d696e0a8d662893e222fe1b82c3bea88387b4deba9fd6382649ccd32f9\": container with ID starting with 805766d696e0a8d662893e222fe1b82c3bea88387b4deba9fd6382649ccd32f9 not found: ID does not exist" containerID="805766d696e0a8d662893e222fe1b82c3bea88387b4deba9fd6382649ccd32f9" Feb 27 10:44:03 crc kubenswrapper[4728]: I0227 10:44:03.183965 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"805766d696e0a8d662893e222fe1b82c3bea88387b4deba9fd6382649ccd32f9"} err="failed to get container status \"805766d696e0a8d662893e222fe1b82c3bea88387b4deba9fd6382649ccd32f9\": rpc error: code = NotFound desc = could not find container \"805766d696e0a8d662893e222fe1b82c3bea88387b4deba9fd6382649ccd32f9\": container with ID starting with 805766d696e0a8d662893e222fe1b82c3bea88387b4deba9fd6382649ccd32f9 not found: ID does not exist" Feb 27 10:44:03 crc kubenswrapper[4728]: I0227 10:44:03.214147 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536484-b8wn9" podStartSLOduration=2.445805219 podStartE2EDuration="3.214122829s" podCreationTimestamp="2026-02-27 10:44:00 +0000 UTC" firstStartedPulling="2026-02-27 10:44:02.010814787 +0000 UTC m=+1061.973180893" lastFinishedPulling="2026-02-27 10:44:02.779132387 +0000 UTC m=+1062.741498503" observedRunningTime="2026-02-27 10:44:03.178567532 +0000 UTC m=+1063.140933648" watchObservedRunningTime="2026-02-27 10:44:03.214122829 +0000 UTC m=+1063.176488975" Feb 27 10:44:03 crc kubenswrapper[4728]: I0227 10:44:03.230476 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-jrh44"] Feb 27 10:44:03 crc kubenswrapper[4728]: I0227 
10:44:03.241492 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-jrh44"] Feb 27 10:44:03 crc kubenswrapper[4728]: I0227 10:44:03.396155 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4cwsw" Feb 27 10:44:04 crc kubenswrapper[4728]: I0227 10:44:04.164132 4728 generic.go:334] "Generic (PLEG): container finished" podID="e2071f71-edef-47c5-a08a-3ed7891d66c0" containerID="a3e438cea8d8eca61ab4f50315598a30164960ef35bd6515c9bae8df76d98d83" exitCode=0 Feb 27 10:44:04 crc kubenswrapper[4728]: I0227 10:44:04.164757 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536484-b8wn9" event={"ID":"e2071f71-edef-47c5-a08a-3ed7891d66c0","Type":"ContainerDied","Data":"a3e438cea8d8eca61ab4f50315598a30164960ef35bd6515c9bae8df76d98d83"} Feb 27 10:44:04 crc kubenswrapper[4728]: I0227 10:44:04.733474 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="529b1284-0992-4ee8-888e-fa3531547bac" path="/var/lib/kubelet/pods/529b1284-0992-4ee8-888e-fa3531547bac/volumes" Feb 27 10:44:05 crc kubenswrapper[4728]: I0227 10:44:05.629430 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536484-b8wn9" Feb 27 10:44:05 crc kubenswrapper[4728]: I0227 10:44:05.738687 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d892\" (UniqueName: \"kubernetes.io/projected/e2071f71-edef-47c5-a08a-3ed7891d66c0-kube-api-access-7d892\") pod \"e2071f71-edef-47c5-a08a-3ed7891d66c0\" (UID: \"e2071f71-edef-47c5-a08a-3ed7891d66c0\") " Feb 27 10:44:05 crc kubenswrapper[4728]: I0227 10:44:05.744117 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2071f71-edef-47c5-a08a-3ed7891d66c0-kube-api-access-7d892" (OuterVolumeSpecName: "kube-api-access-7d892") pod "e2071f71-edef-47c5-a08a-3ed7891d66c0" (UID: "e2071f71-edef-47c5-a08a-3ed7891d66c0"). InnerVolumeSpecName "kube-api-access-7d892". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:44:05 crc kubenswrapper[4728]: I0227 10:44:05.840264 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d892\" (UniqueName: \"kubernetes.io/projected/e2071f71-edef-47c5-a08a-3ed7891d66c0-kube-api-access-7d892\") on node \"crc\" DevicePath \"\"" Feb 27 10:44:06 crc kubenswrapper[4728]: I0227 10:44:06.188374 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536484-b8wn9" event={"ID":"e2071f71-edef-47c5-a08a-3ed7891d66c0","Type":"ContainerDied","Data":"3833532769b3ea7d07adebdabc5ae937b068ee72709a3e28593be64a00c4f096"} Feb 27 10:44:06 crc kubenswrapper[4728]: I0227 10:44:06.188434 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3833532769b3ea7d07adebdabc5ae937b068ee72709a3e28593be64a00c4f096" Feb 27 10:44:06 crc kubenswrapper[4728]: I0227 10:44:06.188467 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536484-b8wn9" Feb 27 10:44:06 crc kubenswrapper[4728]: I0227 10:44:06.247245 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536478-gfnt7"] Feb 27 10:44:06 crc kubenswrapper[4728]: I0227 10:44:06.259023 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536478-gfnt7"] Feb 27 10:44:06 crc kubenswrapper[4728]: I0227 10:44:06.736403 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc56479a-8090-419f-bcb1-5557bdae6677" path="/var/lib/kubelet/pods/cc56479a-8090-419f-bcb1-5557bdae6677/volumes" Feb 27 10:44:12 crc kubenswrapper[4728]: I0227 10:44:12.071982 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-z7cr9" Feb 27 10:44:12 crc kubenswrapper[4728]: I0227 10:44:12.072577 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-z7cr9" Feb 27 10:44:12 crc kubenswrapper[4728]: I0227 10:44:12.103541 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-z7cr9" Feb 27 10:44:12 crc kubenswrapper[4728]: I0227 10:44:12.289869 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-z7cr9" Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 10:44:13.400327 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-2mg6c" Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 10:44:13.548221 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq"] Feb 27 10:44:13 crc kubenswrapper[4728]: E0227 10:44:13.548571 4728 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e2071f71-edef-47c5-a08a-3ed7891d66c0" containerName="oc" Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 10:44:13.548592 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2071f71-edef-47c5-a08a-3ed7891d66c0" containerName="oc" Feb 27 10:44:13 crc kubenswrapper[4728]: E0227 10:44:13.548636 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529b1284-0992-4ee8-888e-fa3531547bac" containerName="registry-server" Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 10:44:13.548645 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="529b1284-0992-4ee8-888e-fa3531547bac" containerName="registry-server" Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 10:44:13.548812 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="529b1284-0992-4ee8-888e-fa3531547bac" containerName="registry-server" Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 10:44:13.548837 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2071f71-edef-47c5-a08a-3ed7891d66c0" containerName="oc" Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 10:44:13.550041 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq" Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 10:44:13.551738 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-pkk9q" Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 10:44:13.566202 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq"] Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 10:44:13.572575 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a67acb1e-a1c9-44b6-805f-39313a1961cf-bundle\") pod \"434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq\" (UID: \"a67acb1e-a1c9-44b6-805f-39313a1961cf\") " pod="openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq" Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 10:44:13.572678 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcw4d\" (UniqueName: \"kubernetes.io/projected/a67acb1e-a1c9-44b6-805f-39313a1961cf-kube-api-access-dcw4d\") pod \"434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq\" (UID: \"a67acb1e-a1c9-44b6-805f-39313a1961cf\") " pod="openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq" Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 10:44:13.572761 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a67acb1e-a1c9-44b6-805f-39313a1961cf-util\") pod \"434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq\" (UID: \"a67acb1e-a1c9-44b6-805f-39313a1961cf\") " pod="openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq" Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 
10:44:13.674526 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a67acb1e-a1c9-44b6-805f-39313a1961cf-bundle\") pod \"434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq\" (UID: \"a67acb1e-a1c9-44b6-805f-39313a1961cf\") " pod="openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq" Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 10:44:13.675029 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcw4d\" (UniqueName: \"kubernetes.io/projected/a67acb1e-a1c9-44b6-805f-39313a1961cf-kube-api-access-dcw4d\") pod \"434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq\" (UID: \"a67acb1e-a1c9-44b6-805f-39313a1961cf\") " pod="openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq" Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 10:44:13.675082 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a67acb1e-a1c9-44b6-805f-39313a1961cf-bundle\") pod \"434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq\" (UID: \"a67acb1e-a1c9-44b6-805f-39313a1961cf\") " pod="openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq" Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 10:44:13.675476 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a67acb1e-a1c9-44b6-805f-39313a1961cf-util\") pod \"434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq\" (UID: \"a67acb1e-a1c9-44b6-805f-39313a1961cf\") " pod="openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq" Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 10:44:13.675983 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a67acb1e-a1c9-44b6-805f-39313a1961cf-util\") pod \"434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq\" (UID: \"a67acb1e-a1c9-44b6-805f-39313a1961cf\") " pod="openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq" Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 10:44:13.701601 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcw4d\" (UniqueName: \"kubernetes.io/projected/a67acb1e-a1c9-44b6-805f-39313a1961cf-kube-api-access-dcw4d\") pod \"434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq\" (UID: \"a67acb1e-a1c9-44b6-805f-39313a1961cf\") " pod="openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq" Feb 27 10:44:13 crc kubenswrapper[4728]: I0227 10:44:13.873110 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq" Feb 27 10:44:14 crc kubenswrapper[4728]: I0227 10:44:14.323227 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq"] Feb 27 10:44:14 crc kubenswrapper[4728]: W0227 10:44:14.331434 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda67acb1e_a1c9_44b6_805f_39313a1961cf.slice/crio-fabcb7d5cda418ec0a1ccc2cab7e0bce25a7be5f457003785efe969489a3eb24 WatchSource:0}: Error finding container fabcb7d5cda418ec0a1ccc2cab7e0bce25a7be5f457003785efe969489a3eb24: Status 404 returned error can't find the container with id fabcb7d5cda418ec0a1ccc2cab7e0bce25a7be5f457003785efe969489a3eb24 Feb 27 10:44:15 crc kubenswrapper[4728]: I0227 10:44:15.281233 4728 generic.go:334] "Generic (PLEG): container finished" podID="a67acb1e-a1c9-44b6-805f-39313a1961cf" containerID="371402e04d0d8b751a4ebc40584fc2c4d25c7a6d81ae3192d7d8cbd6aa2fe72e" exitCode=0 Feb 27 
10:44:15 crc kubenswrapper[4728]: I0227 10:44:15.281303 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq" event={"ID":"a67acb1e-a1c9-44b6-805f-39313a1961cf","Type":"ContainerDied","Data":"371402e04d0d8b751a4ebc40584fc2c4d25c7a6d81ae3192d7d8cbd6aa2fe72e"} Feb 27 10:44:15 crc kubenswrapper[4728]: I0227 10:44:15.281573 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq" event={"ID":"a67acb1e-a1c9-44b6-805f-39313a1961cf","Type":"ContainerStarted","Data":"fabcb7d5cda418ec0a1ccc2cab7e0bce25a7be5f457003785efe969489a3eb24"} Feb 27 10:44:17 crc kubenswrapper[4728]: I0227 10:44:17.303303 4728 generic.go:334] "Generic (PLEG): container finished" podID="a67acb1e-a1c9-44b6-805f-39313a1961cf" containerID="7bae68570fb78d1a5c1116d466864beb94dbb262c02d23cfcb7c24afd84389de" exitCode=0 Feb 27 10:44:17 crc kubenswrapper[4728]: I0227 10:44:17.303448 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq" event={"ID":"a67acb1e-a1c9-44b6-805f-39313a1961cf","Type":"ContainerDied","Data":"7bae68570fb78d1a5c1116d466864beb94dbb262c02d23cfcb7c24afd84389de"} Feb 27 10:44:18 crc kubenswrapper[4728]: I0227 10:44:18.316600 4728 generic.go:334] "Generic (PLEG): container finished" podID="a67acb1e-a1c9-44b6-805f-39313a1961cf" containerID="8d8a76761cbc4d7b8df53f415c8452273b59aa3d98e37f45664221d98b95cc5a" exitCode=0 Feb 27 10:44:18 crc kubenswrapper[4728]: I0227 10:44:18.316733 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq" event={"ID":"a67acb1e-a1c9-44b6-805f-39313a1961cf","Type":"ContainerDied","Data":"8d8a76761cbc4d7b8df53f415c8452273b59aa3d98e37f45664221d98b95cc5a"} Feb 27 10:44:19 crc kubenswrapper[4728]: I0227 10:44:19.699815 
4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq" Feb 27 10:44:19 crc kubenswrapper[4728]: I0227 10:44:19.888958 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a67acb1e-a1c9-44b6-805f-39313a1961cf-bundle\") pod \"a67acb1e-a1c9-44b6-805f-39313a1961cf\" (UID: \"a67acb1e-a1c9-44b6-805f-39313a1961cf\") " Feb 27 10:44:19 crc kubenswrapper[4728]: I0227 10:44:19.889127 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a67acb1e-a1c9-44b6-805f-39313a1961cf-util\") pod \"a67acb1e-a1c9-44b6-805f-39313a1961cf\" (UID: \"a67acb1e-a1c9-44b6-805f-39313a1961cf\") " Feb 27 10:44:19 crc kubenswrapper[4728]: I0227 10:44:19.889219 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcw4d\" (UniqueName: \"kubernetes.io/projected/a67acb1e-a1c9-44b6-805f-39313a1961cf-kube-api-access-dcw4d\") pod \"a67acb1e-a1c9-44b6-805f-39313a1961cf\" (UID: \"a67acb1e-a1c9-44b6-805f-39313a1961cf\") " Feb 27 10:44:19 crc kubenswrapper[4728]: I0227 10:44:19.890246 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a67acb1e-a1c9-44b6-805f-39313a1961cf-bundle" (OuterVolumeSpecName: "bundle") pod "a67acb1e-a1c9-44b6-805f-39313a1961cf" (UID: "a67acb1e-a1c9-44b6-805f-39313a1961cf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:44:19 crc kubenswrapper[4728]: I0227 10:44:19.898812 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a67acb1e-a1c9-44b6-805f-39313a1961cf-kube-api-access-dcw4d" (OuterVolumeSpecName: "kube-api-access-dcw4d") pod "a67acb1e-a1c9-44b6-805f-39313a1961cf" (UID: "a67acb1e-a1c9-44b6-805f-39313a1961cf"). 
InnerVolumeSpecName "kube-api-access-dcw4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:44:19 crc kubenswrapper[4728]: I0227 10:44:19.906353 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a67acb1e-a1c9-44b6-805f-39313a1961cf-util" (OuterVolumeSpecName: "util") pod "a67acb1e-a1c9-44b6-805f-39313a1961cf" (UID: "a67acb1e-a1c9-44b6-805f-39313a1961cf"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:44:19 crc kubenswrapper[4728]: I0227 10:44:19.990931 4728 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a67acb1e-a1c9-44b6-805f-39313a1961cf-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:44:19 crc kubenswrapper[4728]: I0227 10:44:19.990961 4728 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a67acb1e-a1c9-44b6-805f-39313a1961cf-util\") on node \"crc\" DevicePath \"\"" Feb 27 10:44:19 crc kubenswrapper[4728]: I0227 10:44:19.990990 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcw4d\" (UniqueName: \"kubernetes.io/projected/a67acb1e-a1c9-44b6-805f-39313a1961cf-kube-api-access-dcw4d\") on node \"crc\" DevicePath \"\"" Feb 27 10:44:20 crc kubenswrapper[4728]: I0227 10:44:20.341355 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq" event={"ID":"a67acb1e-a1c9-44b6-805f-39313a1961cf","Type":"ContainerDied","Data":"fabcb7d5cda418ec0a1ccc2cab7e0bce25a7be5f457003785efe969489a3eb24"} Feb 27 10:44:20 crc kubenswrapper[4728]: I0227 10:44:20.341411 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fabcb7d5cda418ec0a1ccc2cab7e0bce25a7be5f457003785efe969489a3eb24" Feb 27 10:44:20 crc kubenswrapper[4728]: I0227 10:44:20.341497 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq" Feb 27 10:44:25 crc kubenswrapper[4728]: I0227 10:44:25.847705 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-76c4d9668d-f265t"] Feb 27 10:44:25 crc kubenswrapper[4728]: E0227 10:44:25.848742 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67acb1e-a1c9-44b6-805f-39313a1961cf" containerName="util" Feb 27 10:44:25 crc kubenswrapper[4728]: I0227 10:44:25.848763 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67acb1e-a1c9-44b6-805f-39313a1961cf" containerName="util" Feb 27 10:44:25 crc kubenswrapper[4728]: E0227 10:44:25.848782 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67acb1e-a1c9-44b6-805f-39313a1961cf" containerName="pull" Feb 27 10:44:25 crc kubenswrapper[4728]: I0227 10:44:25.848793 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67acb1e-a1c9-44b6-805f-39313a1961cf" containerName="pull" Feb 27 10:44:25 crc kubenswrapper[4728]: E0227 10:44:25.848817 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67acb1e-a1c9-44b6-805f-39313a1961cf" containerName="extract" Feb 27 10:44:25 crc kubenswrapper[4728]: I0227 10:44:25.848846 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67acb1e-a1c9-44b6-805f-39313a1961cf" containerName="extract" Feb 27 10:44:25 crc kubenswrapper[4728]: I0227 10:44:25.849085 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67acb1e-a1c9-44b6-805f-39313a1961cf" containerName="extract" Feb 27 10:44:25 crc kubenswrapper[4728]: I0227 10:44:25.849799 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-76c4d9668d-f265t" Feb 27 10:44:25 crc kubenswrapper[4728]: I0227 10:44:25.857548 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-47sbg" Feb 27 10:44:25 crc kubenswrapper[4728]: I0227 10:44:25.882218 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-76c4d9668d-f265t"] Feb 27 10:44:25 crc kubenswrapper[4728]: I0227 10:44:25.904685 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbpws\" (UniqueName: \"kubernetes.io/projected/d0eba51d-03cc-4b20-88ea-58f1d34276e5-kube-api-access-qbpws\") pod \"openstack-operator-controller-init-76c4d9668d-f265t\" (UID: \"d0eba51d-03cc-4b20-88ea-58f1d34276e5\") " pod="openstack-operators/openstack-operator-controller-init-76c4d9668d-f265t" Feb 27 10:44:26 crc kubenswrapper[4728]: I0227 10:44:26.006805 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbpws\" (UniqueName: \"kubernetes.io/projected/d0eba51d-03cc-4b20-88ea-58f1d34276e5-kube-api-access-qbpws\") pod \"openstack-operator-controller-init-76c4d9668d-f265t\" (UID: \"d0eba51d-03cc-4b20-88ea-58f1d34276e5\") " pod="openstack-operators/openstack-operator-controller-init-76c4d9668d-f265t" Feb 27 10:44:26 crc kubenswrapper[4728]: I0227 10:44:26.030087 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbpws\" (UniqueName: \"kubernetes.io/projected/d0eba51d-03cc-4b20-88ea-58f1d34276e5-kube-api-access-qbpws\") pod \"openstack-operator-controller-init-76c4d9668d-f265t\" (UID: \"d0eba51d-03cc-4b20-88ea-58f1d34276e5\") " pod="openstack-operators/openstack-operator-controller-init-76c4d9668d-f265t" Feb 27 10:44:26 crc kubenswrapper[4728]: I0227 10:44:26.170261 4728 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-76c4d9668d-f265t" Feb 27 10:44:26 crc kubenswrapper[4728]: I0227 10:44:26.636546 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-76c4d9668d-f265t"] Feb 27 10:44:26 crc kubenswrapper[4728]: W0227 10:44:26.642004 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0eba51d_03cc_4b20_88ea_58f1d34276e5.slice/crio-e55d65fa15a801e3d235415bb02263888ac66793c0eb448432dc96580ce4d4b3 WatchSource:0}: Error finding container e55d65fa15a801e3d235415bb02263888ac66793c0eb448432dc96580ce4d4b3: Status 404 returned error can't find the container with id e55d65fa15a801e3d235415bb02263888ac66793c0eb448432dc96580ce4d4b3 Feb 27 10:44:27 crc kubenswrapper[4728]: I0227 10:44:27.414219 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-76c4d9668d-f265t" event={"ID":"d0eba51d-03cc-4b20-88ea-58f1d34276e5","Type":"ContainerStarted","Data":"e55d65fa15a801e3d235415bb02263888ac66793c0eb448432dc96580ce4d4b3"} Feb 27 10:44:32 crc kubenswrapper[4728]: I0227 10:44:32.459888 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-76c4d9668d-f265t" event={"ID":"d0eba51d-03cc-4b20-88ea-58f1d34276e5","Type":"ContainerStarted","Data":"e6e8344a48603e825197bde0d014dc3eca154f696ad93d676cff105f40fdf2ad"} Feb 27 10:44:32 crc kubenswrapper[4728]: I0227 10:44:32.460429 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-76c4d9668d-f265t" Feb 27 10:44:32 crc kubenswrapper[4728]: I0227 10:44:32.504301 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-76c4d9668d-f265t" podStartSLOduration=2.711565268 
podStartE2EDuration="7.504281994s" podCreationTimestamp="2026-02-27 10:44:25 +0000 UTC" firstStartedPulling="2026-02-27 10:44:26.643658019 +0000 UTC m=+1086.606024165" lastFinishedPulling="2026-02-27 10:44:31.436374795 +0000 UTC m=+1091.398740891" observedRunningTime="2026-02-27 10:44:32.498637419 +0000 UTC m=+1092.461003535" watchObservedRunningTime="2026-02-27 10:44:32.504281994 +0000 UTC m=+1092.466648110" Feb 27 10:44:35 crc kubenswrapper[4728]: I0227 10:44:35.922196 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:44:35 crc kubenswrapper[4728]: I0227 10:44:35.922641 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:44:36 crc kubenswrapper[4728]: I0227 10:44:36.177543 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-76c4d9668d-f265t" Feb 27 10:44:48 crc kubenswrapper[4728]: I0227 10:44:48.185140 4728 scope.go:117] "RemoveContainer" containerID="47cf2e934432f313b3d956021e7985061ccf5e96e4cad0a9fc224fd456cb56e0" Feb 27 10:44:56 crc kubenswrapper[4728]: I0227 10:44:56.924642 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-8dzvn"] Feb 27 10:44:56 crc kubenswrapper[4728]: I0227 10:44:56.926233 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-8dzvn" Feb 27 10:44:56 crc kubenswrapper[4728]: I0227 10:44:56.928198 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-wdsjh" Feb 27 10:44:56 crc kubenswrapper[4728]: I0227 10:44:56.935376 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2whpj"] Feb 27 10:44:56 crc kubenswrapper[4728]: I0227 10:44:56.936472 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2whpj" Feb 27 10:44:56 crc kubenswrapper[4728]: I0227 10:44:56.940997 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-8dzvn"] Feb 27 10:44:56 crc kubenswrapper[4728]: I0227 10:44:56.950846 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-fdq7x" Feb 27 10:44:56 crc kubenswrapper[4728]: I0227 10:44:56.963602 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-l2ffb"] Feb 27 10:44:56 crc kubenswrapper[4728]: I0227 10:44:56.964963 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-l2ffb" Feb 27 10:44:56 crc kubenswrapper[4728]: I0227 10:44:56.968932 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-njqh2" Feb 27 10:44:56 crc kubenswrapper[4728]: I0227 10:44:56.978497 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-l2ffb"] Feb 27 10:44:56 crc kubenswrapper[4728]: I0227 10:44:56.982262 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx9kr\" (UniqueName: \"kubernetes.io/projected/46e1e2f4-2677-4d4f-88c3-22c7f3942e12-kube-api-access-tx9kr\") pod \"barbican-operator-controller-manager-6db6876945-8dzvn\" (UID: \"46e1e2f4-2677-4d4f-88c3-22c7f3942e12\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-8dzvn" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.011804 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2whpj"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.049364 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-sfznl"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.051763 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-sfznl" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.056176 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-zw5d6" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.069089 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-j8qfr"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.070120 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-j8qfr" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.071970 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-vpr8f" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.085839 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx9kr\" (UniqueName: \"kubernetes.io/projected/46e1e2f4-2677-4d4f-88c3-22c7f3942e12-kube-api-access-tx9kr\") pod \"barbican-operator-controller-manager-6db6876945-8dzvn\" (UID: \"46e1e2f4-2677-4d4f-88c3-22c7f3942e12\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-8dzvn" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.085893 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sjng\" (UniqueName: \"kubernetes.io/projected/c34e67f9-f7a4-4918-85c5-9d26f0f47f83-kube-api-access-2sjng\") pod \"heat-operator-controller-manager-cf99c678f-sfznl\" (UID: \"c34e67f9-f7a4-4918-85c5-9d26f0f47f83\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-sfznl" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.085957 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-ghvbk\" (UniqueName: \"kubernetes.io/projected/6fd71010-803b-40dc-8cba-72c0a8987b5b-kube-api-access-ghvbk\") pod \"cinder-operator-controller-manager-55d77d7b5c-2whpj\" (UID: \"6fd71010-803b-40dc-8cba-72c0a8987b5b\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2whpj" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.085983 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26ppk\" (UniqueName: \"kubernetes.io/projected/65769a2d-f0c4-4af4-b5ce-f5918e90bfbf-kube-api-access-26ppk\") pod \"designate-operator-controller-manager-5d87c9d997-l2ffb\" (UID: \"65769a2d-f0c4-4af4-b5ce-f5918e90bfbf\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-l2ffb" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.086001 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mklkc\" (UniqueName: \"kubernetes.io/projected/7f038385-91be-41e0-b79c-0f6160bdf07a-kube-api-access-mklkc\") pod \"glance-operator-controller-manager-64db6967f8-j8qfr\" (UID: \"7f038385-91be-41e0-b79c-0f6160bdf07a\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-j8qfr" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.098156 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-sfznl"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.114832 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2prk8"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.115895 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2prk8" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.140578 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-j8qfr"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.141025 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-974vr" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.150957 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.152042 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.154049 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.154271 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-8gb5t" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.165356 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx9kr\" (UniqueName: \"kubernetes.io/projected/46e1e2f4-2677-4d4f-88c3-22c7f3942e12-kube-api-access-tx9kr\") pod \"barbican-operator-controller-manager-6db6876945-8dzvn\" (UID: \"46e1e2f4-2677-4d4f-88c3-22c7f3942e12\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-8dzvn" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.172390 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2prk8"] Feb 27 10:44:57 crc 
kubenswrapper[4728]: I0227 10:44:57.186686 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghvbk\" (UniqueName: \"kubernetes.io/projected/6fd71010-803b-40dc-8cba-72c0a8987b5b-kube-api-access-ghvbk\") pod \"cinder-operator-controller-manager-55d77d7b5c-2whpj\" (UID: \"6fd71010-803b-40dc-8cba-72c0a8987b5b\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2whpj" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.186734 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26ppk\" (UniqueName: \"kubernetes.io/projected/65769a2d-f0c4-4af4-b5ce-f5918e90bfbf-kube-api-access-26ppk\") pod \"designate-operator-controller-manager-5d87c9d997-l2ffb\" (UID: \"65769a2d-f0c4-4af4-b5ce-f5918e90bfbf\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-l2ffb" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.186753 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mklkc\" (UniqueName: \"kubernetes.io/projected/7f038385-91be-41e0-b79c-0f6160bdf07a-kube-api-access-mklkc\") pod \"glance-operator-controller-manager-64db6967f8-j8qfr\" (UID: \"7f038385-91be-41e0-b79c-0f6160bdf07a\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-j8qfr" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.186780 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb8jh\" (UniqueName: \"kubernetes.io/projected/35b1713c-2089-47f7-8537-468fc7a8f79e-kube-api-access-pb8jh\") pod \"horizon-operator-controller-manager-78bc7f9bd9-2prk8\" (UID: \"35b1713c-2089-47f7-8537-468fc7a8f79e\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2prk8" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.186830 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-2gxfg\" (UID: \"0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.186868 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sjng\" (UniqueName: \"kubernetes.io/projected/c34e67f9-f7a4-4918-85c5-9d26f0f47f83-kube-api-access-2sjng\") pod \"heat-operator-controller-manager-cf99c678f-sfznl\" (UID: \"c34e67f9-f7a4-4918-85c5-9d26f0f47f83\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-sfznl" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.186929 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8v4q\" (UniqueName: \"kubernetes.io/projected/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-kube-api-access-f8v4q\") pod \"infra-operator-controller-manager-f7fcc58b9-2gxfg\" (UID: \"0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.201036 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-qrz29"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.202062 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-qrz29" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.210746 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-btgr8" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.215555 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55ffd4876b-qkrxj"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.223830 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-qkrxj" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.224254 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sjng\" (UniqueName: \"kubernetes.io/projected/c34e67f9-f7a4-4918-85c5-9d26f0f47f83-kube-api-access-2sjng\") pod \"heat-operator-controller-manager-cf99c678f-sfznl\" (UID: \"c34e67f9-f7a4-4918-85c5-9d26f0f47f83\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-sfznl" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.229011 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-mb47v" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.232368 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghvbk\" (UniqueName: \"kubernetes.io/projected/6fd71010-803b-40dc-8cba-72c0a8987b5b-kube-api-access-ghvbk\") pod \"cinder-operator-controller-manager-55d77d7b5c-2whpj\" (UID: \"6fd71010-803b-40dc-8cba-72c0a8987b5b\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2whpj" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.233126 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mklkc\" (UniqueName: 
\"kubernetes.io/projected/7f038385-91be-41e0-b79c-0f6160bdf07a-kube-api-access-mklkc\") pod \"glance-operator-controller-manager-64db6967f8-j8qfr\" (UID: \"7f038385-91be-41e0-b79c-0f6160bdf07a\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-j8qfr" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.233700 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.247336 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26ppk\" (UniqueName: \"kubernetes.io/projected/65769a2d-f0c4-4af4-b5ce-f5918e90bfbf-kube-api-access-26ppk\") pod \"designate-operator-controller-manager-5d87c9d997-l2ffb\" (UID: \"65769a2d-f0c4-4af4-b5ce-f5918e90bfbf\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-l2ffb" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.258586 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-qrz29"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.259017 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-8dzvn" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.277984 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2whpj" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.286109 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55ffd4876b-qkrxj"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.288227 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-2gxfg\" (UID: \"0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.288325 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8v4q\" (UniqueName: \"kubernetes.io/projected/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-kube-api-access-f8v4q\") pod \"infra-operator-controller-manager-f7fcc58b9-2gxfg\" (UID: \"0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.288374 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb8jh\" (UniqueName: \"kubernetes.io/projected/35b1713c-2089-47f7-8537-468fc7a8f79e-kube-api-access-pb8jh\") pod \"horizon-operator-controller-manager-78bc7f9bd9-2prk8\" (UID: \"35b1713c-2089-47f7-8537-468fc7a8f79e\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2prk8" Feb 27 10:44:57 crc kubenswrapper[4728]: E0227 10:44:57.289280 4728 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 10:44:57 crc kubenswrapper[4728]: E0227 10:44:57.289333 4728 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-cert podName:0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9 nodeName:}" failed. No retries permitted until 2026-02-27 10:44:57.789316472 +0000 UTC m=+1117.751682578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-cert") pod "infra-operator-controller-manager-f7fcc58b9-2gxfg" (UID: "0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9") : secret "infra-operator-webhook-server-cert" not found Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.289756 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-l2ffb" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.297889 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-ngg85"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.299650 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-ngg85" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.306797 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-vw44x" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.313519 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-556b8b874-rxh2d"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.316200 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-rxh2d" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.317331 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb8jh\" (UniqueName: \"kubernetes.io/projected/35b1713c-2089-47f7-8537-468fc7a8f79e-kube-api-access-pb8jh\") pod \"horizon-operator-controller-manager-78bc7f9bd9-2prk8\" (UID: \"35b1713c-2089-47f7-8537-468fc7a8f79e\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2prk8" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.320938 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-dt2b6" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.326809 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-ngg85"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.338084 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8v4q\" (UniqueName: \"kubernetes.io/projected/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-kube-api-access-f8v4q\") pod \"infra-operator-controller-manager-f7fcc58b9-2gxfg\" (UID: \"0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.357705 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-5xx5w"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.358968 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-5xx5w" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.360847 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-k7fdm" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.375994 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-sfznl" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.390194 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2frk\" (UniqueName: \"kubernetes.io/projected/df32cd75-8d92-4dbe-9e96-0c943a0f2614-kube-api-access-b2frk\") pod \"keystone-operator-controller-manager-55ffd4876b-qkrxj\" (UID: \"df32cd75-8d92-4dbe-9e96-0c943a0f2614\") " pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-qkrxj" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.390271 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wwh4\" (UniqueName: \"kubernetes.io/projected/20a64934-5596-44eb-9646-115ff6b4e9c8-kube-api-access-9wwh4\") pod \"ironic-operator-controller-manager-545456dc4-qrz29\" (UID: \"20a64934-5596-44eb-9646-115ff6b4e9c8\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-qrz29" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.390646 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-j8qfr" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.437231 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-556b8b874-rxh2d"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.446641 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-zwlhz"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.447899 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-zwlhz" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.451946 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-wvgmb" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.455116 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6f5z2"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.456772 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6f5z2" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.461196 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-tw8k9" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.463734 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6f5z2"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.476012 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2prk8" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.486357 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-5xx5w"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.491838 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5hd8\" (UniqueName: \"kubernetes.io/projected/c0bf29f3-9fc3-4d2e-b2f6-979e2ed8f3ce-kube-api-access-k5hd8\") pod \"neutron-operator-controller-manager-54688575f-5xx5w\" (UID: \"c0bf29f3-9fc3-4d2e-b2f6-979e2ed8f3ce\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-5xx5w" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.491920 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c9z9\" (UniqueName: \"kubernetes.io/projected/9556c640-95cc-4030-a15d-eb61a6bcca3b-kube-api-access-9c9z9\") pod \"manila-operator-controller-manager-67d996989d-ngg85\" (UID: \"9556c640-95cc-4030-a15d-eb61a6bcca3b\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-ngg85" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.491975 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sltg\" (UniqueName: \"kubernetes.io/projected/9a392698-f140-4562-b72c-7cbe0a868f1c-kube-api-access-5sltg\") pod \"mariadb-operator-controller-manager-556b8b874-rxh2d\" (UID: \"9a392698-f140-4562-b72c-7cbe0a868f1c\") " pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-rxh2d" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.491995 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2frk\" (UniqueName: 
\"kubernetes.io/projected/df32cd75-8d92-4dbe-9e96-0c943a0f2614-kube-api-access-b2frk\") pod \"keystone-operator-controller-manager-55ffd4876b-qkrxj\" (UID: \"df32cd75-8d92-4dbe-9e96-0c943a0f2614\") " pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-qkrxj" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.492039 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wwh4\" (UniqueName: \"kubernetes.io/projected/20a64934-5596-44eb-9646-115ff6b4e9c8-kube-api-access-9wwh4\") pod \"ironic-operator-controller-manager-545456dc4-qrz29\" (UID: \"20a64934-5596-44eb-9646-115ff6b4e9c8\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-qrz29" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.499816 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-zwlhz"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.517316 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2frk\" (UniqueName: \"kubernetes.io/projected/df32cd75-8d92-4dbe-9e96-0c943a0f2614-kube-api-access-b2frk\") pod \"keystone-operator-controller-manager-55ffd4876b-qkrxj\" (UID: \"df32cd75-8d92-4dbe-9e96-0c943a0f2614\") " pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-qkrxj" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.522172 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wwh4\" (UniqueName: \"kubernetes.io/projected/20a64934-5596-44eb-9646-115ff6b4e9c8-kube-api-access-9wwh4\") pod \"ironic-operator-controller-manager-545456dc4-qrz29\" (UID: \"20a64934-5596-44eb-9646-115ff6b4e9c8\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-qrz29" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.531044 4728 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-hk2dt"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.534680 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-hk2dt" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.538788 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2nxxz" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.549403 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.551427 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.559023 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wvdhx" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.565386 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.594909 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqrp5\" (UniqueName: \"kubernetes.io/projected/8303b246-6796-4521-8c7c-95234d371456-kube-api-access-fqrp5\") pod \"octavia-operator-controller-manager-5d86c7ddb7-6f5z2\" (UID: \"8303b246-6796-4521-8c7c-95234d371456\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6f5z2" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.595538 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5hd8\" 
(UniqueName: \"kubernetes.io/projected/c0bf29f3-9fc3-4d2e-b2f6-979e2ed8f3ce-kube-api-access-k5hd8\") pod \"neutron-operator-controller-manager-54688575f-5xx5w\" (UID: \"c0bf29f3-9fc3-4d2e-b2f6-979e2ed8f3ce\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-5xx5w" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.595860 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c9z9\" (UniqueName: \"kubernetes.io/projected/9556c640-95cc-4030-a15d-eb61a6bcca3b-kube-api-access-9c9z9\") pod \"manila-operator-controller-manager-67d996989d-ngg85\" (UID: \"9556c640-95cc-4030-a15d-eb61a6bcca3b\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-ngg85" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.597278 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-rrwdq"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.598134 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sltg\" (UniqueName: \"kubernetes.io/projected/9a392698-f140-4562-b72c-7cbe0a868f1c-kube-api-access-5sltg\") pod \"mariadb-operator-controller-manager-556b8b874-rxh2d\" (UID: \"9a392698-f140-4562-b72c-7cbe0a868f1c\") " pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-rxh2d" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.598178 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dbsd\" (UniqueName: \"kubernetes.io/projected/2343eecc-62d9-4859-9cf9-6a3ec71f4906-kube-api-access-6dbsd\") pod \"nova-operator-controller-manager-74b6b5dc96-zwlhz\" (UID: \"2343eecc-62d9-4859-9cf9-6a3ec71f4906\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-zwlhz" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.598558 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-rrwdq" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.604983 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-hk2dt"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.612827 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-rrwdq"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.632030 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-g2bjk" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.635652 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5hd8\" (UniqueName: \"kubernetes.io/projected/c0bf29f3-9fc3-4d2e-b2f6-979e2ed8f3ce-kube-api-access-k5hd8\") pod \"neutron-operator-controller-manager-54688575f-5xx5w\" (UID: \"c0bf29f3-9fc3-4d2e-b2f6-979e2ed8f3ce\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-5xx5w" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.640660 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sltg\" (UniqueName: \"kubernetes.io/projected/9a392698-f140-4562-b72c-7cbe0a868f1c-kube-api-access-5sltg\") pod \"mariadb-operator-controller-manager-556b8b874-rxh2d\" (UID: \"9a392698-f140-4562-b72c-7cbe0a868f1c\") " pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-rxh2d" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.646083 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c9z9\" (UniqueName: \"kubernetes.io/projected/9556c640-95cc-4030-a15d-eb61a6bcca3b-kube-api-access-9c9z9\") pod \"manila-operator-controller-manager-67d996989d-ngg85\" (UID: \"9556c640-95cc-4030-a15d-eb61a6bcca3b\") " 
pod="openstack-operators/manila-operator-controller-manager-67d996989d-ngg85" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.646490 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-qrz29" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.679958 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.688697 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-4s265"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.690281 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-4s265" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.692999 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-f4vt9" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.695449 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-qkrxj" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.703823 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psp2m\" (UniqueName: \"kubernetes.io/projected/4c16843a-96c9-45b3-9203-6beef6d0c61c-kube-api-access-psp2m\") pod \"placement-operator-controller-manager-648564c9fc-rrwdq\" (UID: \"4c16843a-96c9-45b3-9203-6beef6d0c61c\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-rrwdq" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.704628 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxc57\" (UniqueName: \"kubernetes.io/projected/d0eb0111-8103-4481-af5b-9507858073ef-kube-api-access-jxc57\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cht92p\" (UID: \"d0eb0111-8103-4481-af5b-9507858073ef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.705822 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-5xx5w" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.706378 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-ngg85" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.706953 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-rxh2d" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.707620 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dbsd\" (UniqueName: \"kubernetes.io/projected/2343eecc-62d9-4859-9cf9-6a3ec71f4906-kube-api-access-6dbsd\") pod \"nova-operator-controller-manager-74b6b5dc96-zwlhz\" (UID: \"2343eecc-62d9-4859-9cf9-6a3ec71f4906\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-zwlhz" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.707745 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz284\" (UniqueName: \"kubernetes.io/projected/10e9cde6-9b86-4bb8-9171-f73a3a034411-kube-api-access-zz284\") pod \"ovn-operator-controller-manager-75684d597f-hk2dt\" (UID: \"10e9cde6-9b86-4bb8-9171-f73a3a034411\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-hk2dt" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.708011 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqrp5\" (UniqueName: \"kubernetes.io/projected/8303b246-6796-4521-8c7c-95234d371456-kube-api-access-fqrp5\") pod \"octavia-operator-controller-manager-5d86c7ddb7-6f5z2\" (UID: \"8303b246-6796-4521-8c7c-95234d371456\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6f5z2" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.708130 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rvnl\" (UniqueName: \"kubernetes.io/projected/11dda47d-9181-4a72-a9ea-68874d9ca367-kube-api-access-5rvnl\") pod \"swift-operator-controller-manager-9b9ff9f4d-4s265\" (UID: \"11dda47d-9181-4a72-a9ea-68874d9ca367\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-4s265" Feb 27 10:44:57 crc 
kubenswrapper[4728]: I0227 10:44:57.708235 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0eb0111-8103-4481-af5b-9507858073ef-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cht92p\" (UID: \"d0eb0111-8103-4481-af5b-9507858073ef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.738728 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-4s265"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.747650 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dbsd\" (UniqueName: \"kubernetes.io/projected/2343eecc-62d9-4859-9cf9-6a3ec71f4906-kube-api-access-6dbsd\") pod \"nova-operator-controller-manager-74b6b5dc96-zwlhz\" (UID: \"2343eecc-62d9-4859-9cf9-6a3ec71f4906\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-zwlhz" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.776782 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7776f7b585-jhzvv"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.778590 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7776f7b585-jhzvv" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.781291 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-ktllr" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.794245 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqrp5\" (UniqueName: \"kubernetes.io/projected/8303b246-6796-4521-8c7c-95234d371456-kube-api-access-fqrp5\") pod \"octavia-operator-controller-manager-5d86c7ddb7-6f5z2\" (UID: \"8303b246-6796-4521-8c7c-95234d371456\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6f5z2" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.805992 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7776f7b585-jhzvv"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.811848 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-2gxfg\" (UID: \"0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.811893 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psp2m\" (UniqueName: \"kubernetes.io/projected/4c16843a-96c9-45b3-9203-6beef6d0c61c-kube-api-access-psp2m\") pod \"placement-operator-controller-manager-648564c9fc-rrwdq\" (UID: \"4c16843a-96c9-45b3-9203-6beef6d0c61c\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-rrwdq" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.811937 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jxc57\" (UniqueName: \"kubernetes.io/projected/d0eb0111-8103-4481-af5b-9507858073ef-kube-api-access-jxc57\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cht92p\" (UID: \"d0eb0111-8103-4481-af5b-9507858073ef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.811967 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz284\" (UniqueName: \"kubernetes.io/projected/10e9cde6-9b86-4bb8-9171-f73a3a034411-kube-api-access-zz284\") pod \"ovn-operator-controller-manager-75684d597f-hk2dt\" (UID: \"10e9cde6-9b86-4bb8-9171-f73a3a034411\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-hk2dt" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.812023 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rvnl\" (UniqueName: \"kubernetes.io/projected/11dda47d-9181-4a72-a9ea-68874d9ca367-kube-api-access-5rvnl\") pod \"swift-operator-controller-manager-9b9ff9f4d-4s265\" (UID: \"11dda47d-9181-4a72-a9ea-68874d9ca367\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-4s265" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.812056 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0eb0111-8103-4481-af5b-9507858073ef-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cht92p\" (UID: \"d0eb0111-8103-4481-af5b-9507858073ef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.812138 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc9kl\" (UniqueName: \"kubernetes.io/projected/4af1c14f-ff9c-4c9c-a110-4f6b462c7acd-kube-api-access-nc9kl\") pod 
\"telemetry-operator-controller-manager-7776f7b585-jhzvv\" (UID: \"4af1c14f-ff9c-4c9c-a110-4f6b462c7acd\") " pod="openstack-operators/telemetry-operator-controller-manager-7776f7b585-jhzvv" Feb 27 10:44:57 crc kubenswrapper[4728]: E0227 10:44:57.812310 4728 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 10:44:57 crc kubenswrapper[4728]: E0227 10:44:57.812354 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-cert podName:0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9 nodeName:}" failed. No retries permitted until 2026-02-27 10:44:58.812337486 +0000 UTC m=+1118.774703592 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-cert") pod "infra-operator-controller-manager-f7fcc58b9-2gxfg" (UID: "0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9") : secret "infra-operator-webhook-server-cert" not found Feb 27 10:44:57 crc kubenswrapper[4728]: E0227 10:44:57.812850 4728 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 10:44:57 crc kubenswrapper[4728]: E0227 10:44:57.812881 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0eb0111-8103-4481-af5b-9507858073ef-cert podName:d0eb0111-8103-4481-af5b-9507858073ef nodeName:}" failed. No retries permitted until 2026-02-27 10:44:58.31287353 +0000 UTC m=+1118.275239636 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d0eb0111-8103-4481-af5b-9507858073ef-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" (UID: "d0eb0111-8103-4481-af5b-9507858073ef") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.836723 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rvnl\" (UniqueName: \"kubernetes.io/projected/11dda47d-9181-4a72-a9ea-68874d9ca367-kube-api-access-5rvnl\") pod \"swift-operator-controller-manager-9b9ff9f4d-4s265\" (UID: \"11dda47d-9181-4a72-a9ea-68874d9ca367\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-4s265" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.837081 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-zwlhz" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.843019 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz284\" (UniqueName: \"kubernetes.io/projected/10e9cde6-9b86-4bb8-9171-f73a3a034411-kube-api-access-zz284\") pod \"ovn-operator-controller-manager-75684d597f-hk2dt\" (UID: \"10e9cde6-9b86-4bb8-9171-f73a3a034411\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-hk2dt" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.843269 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6f5z2" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.844468 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psp2m\" (UniqueName: \"kubernetes.io/projected/4c16843a-96c9-45b3-9203-6beef6d0c61c-kube-api-access-psp2m\") pod \"placement-operator-controller-manager-648564c9fc-rrwdq\" (UID: \"4c16843a-96c9-45b3-9203-6beef6d0c61c\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-rrwdq" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.846355 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxc57\" (UniqueName: \"kubernetes.io/projected/d0eb0111-8103-4481-af5b-9507858073ef-kube-api-access-jxc57\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cht92p\" (UID: \"d0eb0111-8103-4481-af5b-9507858073ef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.846396 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-tq6s8"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.852266 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-tq6s8" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.853932 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-tq6s8"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.854818 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-r8dml" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.870244 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-hk2dt" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.894936 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-jnbr4"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.896202 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jnbr4" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.908185 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-sb86c" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.916013 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qvsv\" (UniqueName: \"kubernetes.io/projected/d246350e-c4cc-4f56-948b-3515032e7645-kube-api-access-9qvsv\") pod \"watcher-operator-controller-manager-bccc79885-jnbr4\" (UID: \"d246350e-c4cc-4f56-948b-3515032e7645\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jnbr4" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.916151 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjpbx\" (UniqueName: \"kubernetes.io/projected/e81ced4b-a6cf-4dba-964a-52f8bcbd82ae-kube-api-access-sjpbx\") pod \"test-operator-controller-manager-55b5ff4dbb-tq6s8\" (UID: \"e81ced4b-a6cf-4dba-964a-52f8bcbd82ae\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-tq6s8" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.916349 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9kl\" (UniqueName: \"kubernetes.io/projected/4af1c14f-ff9c-4c9c-a110-4f6b462c7acd-kube-api-access-nc9kl\") pod 
\"telemetry-operator-controller-manager-7776f7b585-jhzvv\" (UID: \"4af1c14f-ff9c-4c9c-a110-4f6b462c7acd\") " pod="openstack-operators/telemetry-operator-controller-manager-7776f7b585-jhzvv" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.921825 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-jnbr4"] Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.952366 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc9kl\" (UniqueName: \"kubernetes.io/projected/4af1c14f-ff9c-4c9c-a110-4f6b462c7acd-kube-api-access-nc9kl\") pod \"telemetry-operator-controller-manager-7776f7b585-jhzvv\" (UID: \"4af1c14f-ff9c-4c9c-a110-4f6b462c7acd\") " pod="openstack-operators/telemetry-operator-controller-manager-7776f7b585-jhzvv" Feb 27 10:44:57 crc kubenswrapper[4728]: I0227 10:44:57.990664 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-rrwdq" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.017335 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qvsv\" (UniqueName: \"kubernetes.io/projected/d246350e-c4cc-4f56-948b-3515032e7645-kube-api-access-9qvsv\") pod \"watcher-operator-controller-manager-bccc79885-jnbr4\" (UID: \"d246350e-c4cc-4f56-948b-3515032e7645\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jnbr4" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.017402 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjpbx\" (UniqueName: \"kubernetes.io/projected/e81ced4b-a6cf-4dba-964a-52f8bcbd82ae-kube-api-access-sjpbx\") pod \"test-operator-controller-manager-55b5ff4dbb-tq6s8\" (UID: \"e81ced4b-a6cf-4dba-964a-52f8bcbd82ae\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-tq6s8" Feb 27 10:44:58 crc 
kubenswrapper[4728]: I0227 10:44:58.027253 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw"] Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.028354 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.032974 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.033159 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-6wjxw" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.033253 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.050297 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw"] Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.059605 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-4s265" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.077432 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjpbx\" (UniqueName: \"kubernetes.io/projected/e81ced4b-a6cf-4dba-964a-52f8bcbd82ae-kube-api-access-sjpbx\") pod \"test-operator-controller-manager-55b5ff4dbb-tq6s8\" (UID: \"e81ced4b-a6cf-4dba-964a-52f8bcbd82ae\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-tq6s8" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.085303 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qvsv\" (UniqueName: \"kubernetes.io/projected/d246350e-c4cc-4f56-948b-3515032e7645-kube-api-access-9qvsv\") pod \"watcher-operator-controller-manager-bccc79885-jnbr4\" (UID: \"d246350e-c4cc-4f56-948b-3515032e7645\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jnbr4" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.115863 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7776f7b585-jhzvv" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.118177 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k9m4\" (UniqueName: \"kubernetes.io/projected/08f5d7df-9e0f-4c13-8799-bcb605842ffd-kube-api-access-6k9m4\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.118285 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.118349 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-webhook-certs\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.182481 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-tq6s8" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.186868 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5z9sj"] Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.188773 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5z9sj" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.198451 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-x58mm" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.222224 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k9m4\" (UniqueName: \"kubernetes.io/projected/08f5d7df-9e0f-4c13-8799-bcb605842ffd-kube-api-access-6k9m4\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.222366 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.222404 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-webhook-certs\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") 
" pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:44:58 crc kubenswrapper[4728]: E0227 10:44:58.224078 4728 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 10:44:58 crc kubenswrapper[4728]: E0227 10:44:58.225708 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs podName:08f5d7df-9e0f-4c13-8799-bcb605842ffd nodeName:}" failed. No retries permitted until 2026-02-27 10:44:58.725684063 +0000 UTC m=+1118.688050169 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs") pod "openstack-operator-controller-manager-6d5879f6b9-x7nrw" (UID: "08f5d7df-9e0f-4c13-8799-bcb605842ffd") : secret "metrics-server-cert" not found Feb 27 10:44:58 crc kubenswrapper[4728]: E0227 10:44:58.224155 4728 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 10:44:58 crc kubenswrapper[4728]: E0227 10:44:58.226073 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-webhook-certs podName:08f5d7df-9e0f-4c13-8799-bcb605842ffd nodeName:}" failed. No retries permitted until 2026-02-27 10:44:58.726056973 +0000 UTC m=+1118.688423079 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-webhook-certs") pod "openstack-operator-controller-manager-6d5879f6b9-x7nrw" (UID: "08f5d7df-9e0f-4c13-8799-bcb605842ffd") : secret "webhook-server-cert" not found Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.242845 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5z9sj"] Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.250738 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k9m4\" (UniqueName: \"kubernetes.io/projected/08f5d7df-9e0f-4c13-8799-bcb605842ffd-kube-api-access-6k9m4\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.251121 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jnbr4" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.276586 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-8dzvn"] Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.295575 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-sfznl"] Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.344845 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0eb0111-8103-4481-af5b-9507858073ef-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cht92p\" (UID: \"d0eb0111-8103-4481-af5b-9507858073ef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.345103 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z7l7\" (UniqueName: \"kubernetes.io/projected/980c029c-25d4-4063-be4d-ea30564c2120-kube-api-access-6z7l7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5z9sj\" (UID: \"980c029c-25d4-4063-be4d-ea30564c2120\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5z9sj" Feb 27 10:44:58 crc kubenswrapper[4728]: E0227 10:44:58.345319 4728 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 10:44:58 crc kubenswrapper[4728]: E0227 10:44:58.345363 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0eb0111-8103-4481-af5b-9507858073ef-cert podName:d0eb0111-8103-4481-af5b-9507858073ef nodeName:}" failed. 
No retries permitted until 2026-02-27 10:44:59.345348304 +0000 UTC m=+1119.307714410 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d0eb0111-8103-4481-af5b-9507858073ef-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" (UID: "d0eb0111-8103-4481-af5b-9507858073ef") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.430400 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2whpj"] Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.447728 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z7l7\" (UniqueName: \"kubernetes.io/projected/980c029c-25d4-4063-be4d-ea30564c2120-kube-api-access-6z7l7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5z9sj\" (UID: \"980c029c-25d4-4063-be4d-ea30564c2120\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5z9sj" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.466202 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z7l7\" (UniqueName: \"kubernetes.io/projected/980c029c-25d4-4063-be4d-ea30564c2120-kube-api-access-6z7l7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5z9sj\" (UID: \"980c029c-25d4-4063-be4d-ea30564c2120\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5z9sj" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.483080 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-l2ffb"] Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.562549 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-j8qfr"] Feb 27 10:44:58 crc kubenswrapper[4728]: W0227 10:44:58.585663 4728 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f038385_91be_41e0_b79c_0f6160bdf07a.slice/crio-b5a2e4990d84e06c7c7ad125804f32346de8cc7479b6d8a0a2f0ce67cf64ce29 WatchSource:0}: Error finding container b5a2e4990d84e06c7c7ad125804f32346de8cc7479b6d8a0a2f0ce67cf64ce29: Status 404 returned error can't find the container with id b5a2e4990d84e06c7c7ad125804f32346de8cc7479b6d8a0a2f0ce67cf64ce29 Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.649480 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-qrz29"] Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.650631 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5z9sj" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.670279 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2prk8"] Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.670476 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2whpj" event={"ID":"6fd71010-803b-40dc-8cba-72c0a8987b5b","Type":"ContainerStarted","Data":"4f4f3950c0fe174b176582e474caf381864d11f90424780125782e9892c1b02b"} Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.671664 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-8dzvn" event={"ID":"46e1e2f4-2677-4d4f-88c3-22c7f3942e12","Type":"ContainerStarted","Data":"ad7dcac1ed94f282cab2aad9b7e249eb5418733cd8fc6c44de3340b4b9e10e46"} Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.672290 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-sfznl" 
event={"ID":"c34e67f9-f7a4-4918-85c5-9d26f0f47f83","Type":"ContainerStarted","Data":"1088759693df53489819a4f68cc15d093e67d08c027c6a1d39d17e4040e3bee2"} Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.672885 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-j8qfr" event={"ID":"7f038385-91be-41e0-b79c-0f6160bdf07a","Type":"ContainerStarted","Data":"b5a2e4990d84e06c7c7ad125804f32346de8cc7479b6d8a0a2f0ce67cf64ce29"} Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.673788 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-l2ffb" event={"ID":"65769a2d-f0c4-4af4-b5ce-f5918e90bfbf","Type":"ContainerStarted","Data":"4acea7374aeb9ca02aff1733c562f507d131ab50dd20ea1576b931e4f6a0795a"} Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.751715 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.751782 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-webhook-certs\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:44:58 crc kubenswrapper[4728]: E0227 10:44:58.751898 4728 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 10:44:58 crc kubenswrapper[4728]: E0227 10:44:58.751969 4728 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs podName:08f5d7df-9e0f-4c13-8799-bcb605842ffd nodeName:}" failed. No retries permitted until 2026-02-27 10:44:59.751951336 +0000 UTC m=+1119.714317442 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs") pod "openstack-operator-controller-manager-6d5879f6b9-x7nrw" (UID: "08f5d7df-9e0f-4c13-8799-bcb605842ffd") : secret "metrics-server-cert" not found Feb 27 10:44:58 crc kubenswrapper[4728]: E0227 10:44:58.753220 4728 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 10:44:58 crc kubenswrapper[4728]: E0227 10:44:58.753275 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-webhook-certs podName:08f5d7df-9e0f-4c13-8799-bcb605842ffd nodeName:}" failed. No retries permitted until 2026-02-27 10:44:59.753260132 +0000 UTC m=+1119.715626238 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-webhook-certs") pod "openstack-operator-controller-manager-6d5879f6b9-x7nrw" (UID: "08f5d7df-9e0f-4c13-8799-bcb605842ffd") : secret "webhook-server-cert" not found Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.854212 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-2gxfg\" (UID: \"0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg" Feb 27 10:44:58 crc kubenswrapper[4728]: E0227 10:44:58.854451 4728 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 10:44:58 crc kubenswrapper[4728]: E0227 10:44:58.854497 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-cert podName:0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9 nodeName:}" failed. No retries permitted until 2026-02-27 10:45:00.854482896 +0000 UTC m=+1120.816849002 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-cert") pod "infra-operator-controller-manager-f7fcc58b9-2gxfg" (UID: "0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9") : secret "infra-operator-webhook-server-cert" not found Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.878355 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-5xx5w"] Feb 27 10:44:58 crc kubenswrapper[4728]: W0227 10:44:58.888829 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0bf29f3_9fc3_4d2e_b2f6_979e2ed8f3ce.slice/crio-83fdd56d87c8feca3843074ea51cf34be53fe0f937452b1259ebfa7ca95d16b8 WatchSource:0}: Error finding container 83fdd56d87c8feca3843074ea51cf34be53fe0f937452b1259ebfa7ca95d16b8: Status 404 returned error can't find the container with id 83fdd56d87c8feca3843074ea51cf34be53fe0f937452b1259ebfa7ca95d16b8 Feb 27 10:44:58 crc kubenswrapper[4728]: I0227 10:44:58.897394 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-ngg85"] Feb 27 10:44:58 crc kubenswrapper[4728]: W0227 10:44:58.899928 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9556c640_95cc_4030_a15d_eb61a6bcca3b.slice/crio-1d6a62a6b7203fd7926742d8b95f7b4345d0af0889976ea0024631ccacaf9f2e WatchSource:0}: Error finding container 1d6a62a6b7203fd7926742d8b95f7b4345d0af0889976ea0024631ccacaf9f2e: Status 404 returned error can't find the container with id 1d6a62a6b7203fd7926742d8b95f7b4345d0af0889976ea0024631ccacaf9f2e Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.080880 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-zwlhz"] Feb 27 10:44:59 crc kubenswrapper[4728]: 
I0227 10:44:59.088268 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55ffd4876b-qkrxj"] Feb 27 10:44:59 crc kubenswrapper[4728]: W0227 10:44:59.105245 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf32cd75_8d92_4dbe_9e96_0c943a0f2614.slice/crio-35fb9d0994a09a4eda0e07b50d255083c18f160c1943f695e6c16ba1d65c4ff2 WatchSource:0}: Error finding container 35fb9d0994a09a4eda0e07b50d255083c18f160c1943f695e6c16ba1d65c4ff2: Status 404 returned error can't find the container with id 35fb9d0994a09a4eda0e07b50d255083c18f160c1943f695e6c16ba1d65c4ff2 Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.360750 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-tq6s8"] Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.370734 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-556b8b874-rxh2d"] Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.371693 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0eb0111-8103-4481-af5b-9507858073ef-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cht92p\" (UID: \"d0eb0111-8103-4481-af5b-9507858073ef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" Feb 27 10:44:59 crc kubenswrapper[4728]: E0227 10:44:59.373123 4728 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 10:44:59 crc kubenswrapper[4728]: E0227 10:44:59.373180 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0eb0111-8103-4481-af5b-9507858073ef-cert 
podName:d0eb0111-8103-4481-af5b-9507858073ef nodeName:}" failed. No retries permitted until 2026-02-27 10:45:01.373162731 +0000 UTC m=+1121.335528837 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d0eb0111-8103-4481-af5b-9507858073ef-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" (UID: "d0eb0111-8103-4481-af5b-9507858073ef") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 10:44:59 crc kubenswrapper[4728]: W0227 10:44:59.387163 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a392698_f140_4562_b72c_7cbe0a868f1c.slice/crio-408172cd372e96aae04812dec12646e36e3fd06e198592751ed12e63c268a8ec WatchSource:0}: Error finding container 408172cd372e96aae04812dec12646e36e3fd06e198592751ed12e63c268a8ec: Status 404 returned error can't find the container with id 408172cd372e96aae04812dec12646e36e3fd06e198592751ed12e63c268a8ec Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.388743 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-hk2dt"] Feb 27 10:44:59 crc kubenswrapper[4728]: W0227 10:44:59.389194 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10e9cde6_9b86_4bb8_9171_f73a3a034411.slice/crio-0df5390c4832d7f59fd04a9fb4ea6054a493d8b7dd625a0512deb2d63a602481 WatchSource:0}: Error finding container 0df5390c4832d7f59fd04a9fb4ea6054a493d8b7dd625a0512deb2d63a602481: Status 404 returned error can't find the container with id 0df5390c4832d7f59fd04a9fb4ea6054a493d8b7dd625a0512deb2d63a602481 Feb 27 10:44:59 crc kubenswrapper[4728]: W0227 10:44:59.395047 4728 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode81ced4b_a6cf_4dba_964a_52f8bcbd82ae.slice/crio-356b9bc8e4f9a9d8b2cc4e59ea451105d37063ed7fe1e0e2f5b04483d26f47c7 WatchSource:0}: Error finding container 356b9bc8e4f9a9d8b2cc4e59ea451105d37063ed7fe1e0e2f5b04483d26f47c7: Status 404 returned error can't find the container with id 356b9bc8e4f9a9d8b2cc4e59ea451105d37063ed7fe1e0e2f5b04483d26f47c7 Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.406374 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-rrwdq"] Feb 27 10:44:59 crc kubenswrapper[4728]: W0227 10:44:59.414392 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c16843a_96c9_45b3_9203_6beef6d0c61c.slice/crio-e4b26f1a366a56b12fef3ddf5fe8f5f4dd9463d30261b6f0f613cf5d8127d0d4 WatchSource:0}: Error finding container e4b26f1a366a56b12fef3ddf5fe8f5f4dd9463d30261b6f0f613cf5d8127d0d4: Status 404 returned error can't find the container with id e4b26f1a366a56b12fef3ddf5fe8f5f4dd9463d30261b6f0f613cf5d8127d0d4 Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.615916 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-4s265"] Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.629693 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7776f7b585-jhzvv"] Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.632353 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-jnbr4"] Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.638456 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5z9sj"] Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 
10:44:59.646715 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6f5z2"] Feb 27 10:44:59 crc kubenswrapper[4728]: W0227 10:44:59.650916 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11dda47d_9181_4a72_a9ea_68874d9ca367.slice/crio-5575b0b69a9e9887d0e4a5c0f4f73148f5576d03e71403aa5d6e91ecf63090e1 WatchSource:0}: Error finding container 5575b0b69a9e9887d0e4a5c0f4f73148f5576d03e71403aa5d6e91ecf63090e1: Status 404 returned error can't find the container with id 5575b0b69a9e9887d0e4a5c0f4f73148f5576d03e71403aa5d6e91ecf63090e1 Feb 27 10:44:59 crc kubenswrapper[4728]: E0227 10:44:59.658362 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fqrp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5d86c7ddb7-6f5z2_openstack-operators(8303b246-6796-4521-8c7c-95234d371456): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 10:44:59 crc kubenswrapper[4728]: E0227 10:44:59.659530 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6f5z2" podUID="8303b246-6796-4521-8c7c-95234d371456" Feb 27 10:44:59 crc kubenswrapper[4728]: E0227 10:44:59.662033 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9qvsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-bccc79885-jnbr4_openstack-operators(d246350e-c4cc-4f56-948b-3515032e7645): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 10:44:59 crc kubenswrapper[4728]: E0227 10:44:59.663143 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jnbr4" podUID="d246350e-c4cc-4f56-948b-3515032e7645" Feb 27 10:44:59 crc kubenswrapper[4728]: E0227 10:44:59.665738 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6z7l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-5z9sj_openstack-operators(980c029c-25d4-4063-be4d-ea30564c2120): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 10:44:59 crc kubenswrapper[4728]: E0227 10:44:59.666003 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.22:5001/openstack-k8s-operators/telemetry-operator:aecf86554bb8ec55563815e8bb164483ca142491,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nc9kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7776f7b585-jhzvv_openstack-operators(4af1c14f-ff9c-4c9c-a110-4f6b462c7acd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 10:44:59 crc kubenswrapper[4728]: E0227 10:44:59.667481 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7776f7b585-jhzvv" podUID="4af1c14f-ff9c-4c9c-a110-4f6b462c7acd" Feb 27 10:44:59 crc kubenswrapper[4728]: E0227 10:44:59.667565 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5z9sj" podUID="980c029c-25d4-4063-be4d-ea30564c2120" Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.687675 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2prk8" event={"ID":"35b1713c-2089-47f7-8537-468fc7a8f79e","Type":"ContainerStarted","Data":"79b0dadaaba3662ad43595021ab5664d2732a682bccc6b05b949e1d76f0f996a"} 
Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.689605 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-qkrxj" event={"ID":"df32cd75-8d92-4dbe-9e96-0c943a0f2614","Type":"ContainerStarted","Data":"35fb9d0994a09a4eda0e07b50d255083c18f160c1943f695e6c16ba1d65c4ff2"} Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.691948 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-tq6s8" event={"ID":"e81ced4b-a6cf-4dba-964a-52f8bcbd82ae","Type":"ContainerStarted","Data":"356b9bc8e4f9a9d8b2cc4e59ea451105d37063ed7fe1e0e2f5b04483d26f47c7"} Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.694722 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-qrz29" event={"ID":"20a64934-5596-44eb-9646-115ff6b4e9c8","Type":"ContainerStarted","Data":"649fc5642a96b5277397bc6f7a284b08fd42153f7e6644dff4f51525ae46f603"} Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.695813 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-zwlhz" event={"ID":"2343eecc-62d9-4859-9cf9-6a3ec71f4906","Type":"ContainerStarted","Data":"bb4e06686066c8f1171f9ffd0f7413f13a4f8cc750376f0d5a1f56085bbf7fd8"} Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.699164 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5z9sj" event={"ID":"980c029c-25d4-4063-be4d-ea30564c2120","Type":"ContainerStarted","Data":"50257bf0d1b74344d046238de4488c3a5933528d2a48e5a8c14abb60da5ad91b"} Feb 27 10:44:59 crc kubenswrapper[4728]: E0227 10:44:59.701231 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5z9sj" podUID="980c029c-25d4-4063-be4d-ea30564c2120" Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.702870 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6f5z2" event={"ID":"8303b246-6796-4521-8c7c-95234d371456","Type":"ContainerStarted","Data":"5e388e9b4c09bc5b895ab71fc0e57c69bcf03e11e51815a7fff060765cb9ab2e"} Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.704281 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-5xx5w" event={"ID":"c0bf29f3-9fc3-4d2e-b2f6-979e2ed8f3ce","Type":"ContainerStarted","Data":"83fdd56d87c8feca3843074ea51cf34be53fe0f937452b1259ebfa7ca95d16b8"} Feb 27 10:44:59 crc kubenswrapper[4728]: E0227 10:44:59.704378 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6f5z2" podUID="8303b246-6796-4521-8c7c-95234d371456" Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.707216 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-rxh2d" event={"ID":"9a392698-f140-4562-b72c-7cbe0a868f1c","Type":"ContainerStarted","Data":"408172cd372e96aae04812dec12646e36e3fd06e198592751ed12e63c268a8ec"} Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.715679 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-ngg85" 
event={"ID":"9556c640-95cc-4030-a15d-eb61a6bcca3b","Type":"ContainerStarted","Data":"1d6a62a6b7203fd7926742d8b95f7b4345d0af0889976ea0024631ccacaf9f2e"} Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.723621 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7776f7b585-jhzvv" event={"ID":"4af1c14f-ff9c-4c9c-a110-4f6b462c7acd","Type":"ContainerStarted","Data":"0b88d59305da8f0988dfa759cbe0f47c675bf42cc185a79b0a51512b876a087d"} Feb 27 10:44:59 crc kubenswrapper[4728]: E0227 10:44:59.726569 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.22:5001/openstack-k8s-operators/telemetry-operator:aecf86554bb8ec55563815e8bb164483ca142491\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7776f7b585-jhzvv" podUID="4af1c14f-ff9c-4c9c-a110-4f6b462c7acd" Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.730595 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-4s265" event={"ID":"11dda47d-9181-4a72-a9ea-68874d9ca367","Type":"ContainerStarted","Data":"5575b0b69a9e9887d0e4a5c0f4f73148f5576d03e71403aa5d6e91ecf63090e1"} Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.734349 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jnbr4" event={"ID":"d246350e-c4cc-4f56-948b-3515032e7645","Type":"ContainerStarted","Data":"6bfa8e011c2297d1bf8c08a15026d7d883858327fb7c237344e28b398ea7f514"} Feb 27 10:44:59 crc kubenswrapper[4728]: E0227 10:44:59.735367 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jnbr4" podUID="d246350e-c4cc-4f56-948b-3515032e7645" Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.748762 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-hk2dt" event={"ID":"10e9cde6-9b86-4bb8-9171-f73a3a034411","Type":"ContainerStarted","Data":"0df5390c4832d7f59fd04a9fb4ea6054a493d8b7dd625a0512deb2d63a602481"} Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.750176 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-rrwdq" event={"ID":"4c16843a-96c9-45b3-9203-6beef6d0c61c","Type":"ContainerStarted","Data":"e4b26f1a366a56b12fef3ddf5fe8f5f4dd9463d30261b6f0f613cf5d8127d0d4"} Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.779451 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-webhook-certs\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:44:59 crc kubenswrapper[4728]: I0227 10:44:59.779863 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:44:59 crc kubenswrapper[4728]: E0227 10:44:59.781054 4728 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret 
"webhook-server-cert" not found Feb 27 10:44:59 crc kubenswrapper[4728]: E0227 10:44:59.781111 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-webhook-certs podName:08f5d7df-9e0f-4c13-8799-bcb605842ffd nodeName:}" failed. No retries permitted until 2026-02-27 10:45:01.781096149 +0000 UTC m=+1121.743462255 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-webhook-certs") pod "openstack-operator-controller-manager-6d5879f6b9-x7nrw" (UID: "08f5d7df-9e0f-4c13-8799-bcb605842ffd") : secret "webhook-server-cert" not found Feb 27 10:44:59 crc kubenswrapper[4728]: E0227 10:44:59.786027 4728 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 10:44:59 crc kubenswrapper[4728]: E0227 10:44:59.786145 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs podName:08f5d7df-9e0f-4c13-8799-bcb605842ffd nodeName:}" failed. No retries permitted until 2026-02-27 10:45:01.786124068 +0000 UTC m=+1121.748490174 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs") pod "openstack-operator-controller-manager-6d5879f6b9-x7nrw" (UID: "08f5d7df-9e0f-4c13-8799-bcb605842ffd") : secret "metrics-server-cert" not found Feb 27 10:45:00 crc kubenswrapper[4728]: I0227 10:45:00.144249 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc"] Feb 27 10:45:00 crc kubenswrapper[4728]: I0227 10:45:00.145663 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc" Feb 27 10:45:00 crc kubenswrapper[4728]: I0227 10:45:00.150771 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 10:45:00 crc kubenswrapper[4728]: I0227 10:45:00.151097 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 10:45:00 crc kubenswrapper[4728]: I0227 10:45:00.156287 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc"] Feb 27 10:45:00 crc kubenswrapper[4728]: I0227 10:45:00.192182 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvqdv\" (UniqueName: \"kubernetes.io/projected/90322058-3b16-4e4a-8116-7f02e4865437-kube-api-access-vvqdv\") pod \"collect-profiles-29536485-vwhxc\" (UID: \"90322058-3b16-4e4a-8116-7f02e4865437\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc" Feb 27 10:45:00 crc kubenswrapper[4728]: I0227 10:45:00.192318 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90322058-3b16-4e4a-8116-7f02e4865437-secret-volume\") pod \"collect-profiles-29536485-vwhxc\" (UID: \"90322058-3b16-4e4a-8116-7f02e4865437\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc" Feb 27 10:45:00 crc kubenswrapper[4728]: I0227 10:45:00.192364 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90322058-3b16-4e4a-8116-7f02e4865437-config-volume\") pod \"collect-profiles-29536485-vwhxc\" (UID: \"90322058-3b16-4e4a-8116-7f02e4865437\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc" Feb 27 10:45:00 crc kubenswrapper[4728]: I0227 10:45:00.294536 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90322058-3b16-4e4a-8116-7f02e4865437-config-volume\") pod \"collect-profiles-29536485-vwhxc\" (UID: \"90322058-3b16-4e4a-8116-7f02e4865437\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc" Feb 27 10:45:00 crc kubenswrapper[4728]: I0227 10:45:00.294798 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvqdv\" (UniqueName: \"kubernetes.io/projected/90322058-3b16-4e4a-8116-7f02e4865437-kube-api-access-vvqdv\") pod \"collect-profiles-29536485-vwhxc\" (UID: \"90322058-3b16-4e4a-8116-7f02e4865437\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc" Feb 27 10:45:00 crc kubenswrapper[4728]: I0227 10:45:00.294918 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90322058-3b16-4e4a-8116-7f02e4865437-secret-volume\") pod \"collect-profiles-29536485-vwhxc\" (UID: \"90322058-3b16-4e4a-8116-7f02e4865437\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc" Feb 27 10:45:00 crc kubenswrapper[4728]: I0227 10:45:00.295654 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90322058-3b16-4e4a-8116-7f02e4865437-config-volume\") pod \"collect-profiles-29536485-vwhxc\" (UID: \"90322058-3b16-4e4a-8116-7f02e4865437\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc" Feb 27 10:45:00 crc kubenswrapper[4728]: I0227 10:45:00.310318 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/90322058-3b16-4e4a-8116-7f02e4865437-secret-volume\") pod \"collect-profiles-29536485-vwhxc\" (UID: \"90322058-3b16-4e4a-8116-7f02e4865437\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc" Feb 27 10:45:00 crc kubenswrapper[4728]: I0227 10:45:00.311751 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvqdv\" (UniqueName: \"kubernetes.io/projected/90322058-3b16-4e4a-8116-7f02e4865437-kube-api-access-vvqdv\") pod \"collect-profiles-29536485-vwhxc\" (UID: \"90322058-3b16-4e4a-8116-7f02e4865437\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc" Feb 27 10:45:00 crc kubenswrapper[4728]: I0227 10:45:00.471745 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc" Feb 27 10:45:00 crc kubenswrapper[4728]: E0227 10:45:00.761657 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5z9sj" podUID="980c029c-25d4-4063-be4d-ea30564c2120" Feb 27 10:45:00 crc kubenswrapper[4728]: E0227 10:45:00.761985 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6f5z2" podUID="8303b246-6796-4521-8c7c-95234d371456" Feb 27 10:45:00 crc kubenswrapper[4728]: E0227 10:45:00.762025 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.22:5001/openstack-k8s-operators/telemetry-operator:aecf86554bb8ec55563815e8bb164483ca142491\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7776f7b585-jhzvv" podUID="4af1c14f-ff9c-4c9c-a110-4f6b462c7acd" Feb 27 10:45:00 crc kubenswrapper[4728]: E0227 10:45:00.762056 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jnbr4" podUID="d246350e-c4cc-4f56-948b-3515032e7645" Feb 27 10:45:00 crc kubenswrapper[4728]: I0227 10:45:00.913859 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-2gxfg\" (UID: \"0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg" Feb 27 10:45:00 crc kubenswrapper[4728]: E0227 10:45:00.914060 4728 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 10:45:00 crc kubenswrapper[4728]: E0227 10:45:00.914128 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-cert podName:0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9 nodeName:}" failed. No retries permitted until 2026-02-27 10:45:04.914106658 +0000 UTC m=+1124.876472764 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-cert") pod "infra-operator-controller-manager-f7fcc58b9-2gxfg" (UID: "0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9") : secret "infra-operator-webhook-server-cert" not found Feb 27 10:45:01 crc kubenswrapper[4728]: I0227 10:45:01.426866 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0eb0111-8103-4481-af5b-9507858073ef-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cht92p\" (UID: \"d0eb0111-8103-4481-af5b-9507858073ef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" Feb 27 10:45:01 crc kubenswrapper[4728]: E0227 10:45:01.427100 4728 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 10:45:01 crc kubenswrapper[4728]: E0227 10:45:01.427169 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0eb0111-8103-4481-af5b-9507858073ef-cert podName:d0eb0111-8103-4481-af5b-9507858073ef nodeName:}" failed. No retries permitted until 2026-02-27 10:45:05.427151268 +0000 UTC m=+1125.389517374 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d0eb0111-8103-4481-af5b-9507858073ef-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" (UID: "d0eb0111-8103-4481-af5b-9507858073ef") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 10:45:01 crc kubenswrapper[4728]: I0227 10:45:01.834225 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:45:01 crc kubenswrapper[4728]: I0227 10:45:01.834294 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-webhook-certs\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:45:01 crc kubenswrapper[4728]: E0227 10:45:01.834407 4728 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 10:45:01 crc kubenswrapper[4728]: E0227 10:45:01.834476 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs podName:08f5d7df-9e0f-4c13-8799-bcb605842ffd nodeName:}" failed. No retries permitted until 2026-02-27 10:45:05.8344589 +0000 UTC m=+1125.796825006 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs") pod "openstack-operator-controller-manager-6d5879f6b9-x7nrw" (UID: "08f5d7df-9e0f-4c13-8799-bcb605842ffd") : secret "metrics-server-cert" not found Feb 27 10:45:01 crc kubenswrapper[4728]: E0227 10:45:01.834557 4728 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 10:45:01 crc kubenswrapper[4728]: E0227 10:45:01.834603 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-webhook-certs podName:08f5d7df-9e0f-4c13-8799-bcb605842ffd nodeName:}" failed. No retries permitted until 2026-02-27 10:45:05.834588763 +0000 UTC m=+1125.796954869 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-webhook-certs") pod "openstack-operator-controller-manager-6d5879f6b9-x7nrw" (UID: "08f5d7df-9e0f-4c13-8799-bcb605842ffd") : secret "webhook-server-cert" not found Feb 27 10:45:04 crc kubenswrapper[4728]: I0227 10:45:04.995481 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-2gxfg\" (UID: \"0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg" Feb 27 10:45:04 crc kubenswrapper[4728]: E0227 10:45:04.995636 4728 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 10:45:04 crc kubenswrapper[4728]: E0227 10:45:04.996295 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-cert 
podName:0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9 nodeName:}" failed. No retries permitted until 2026-02-27 10:45:12.996275113 +0000 UTC m=+1132.958641219 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-cert") pod "infra-operator-controller-manager-f7fcc58b9-2gxfg" (UID: "0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9") : secret "infra-operator-webhook-server-cert" not found Feb 27 10:45:05 crc kubenswrapper[4728]: I0227 10:45:05.505924 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0eb0111-8103-4481-af5b-9507858073ef-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cht92p\" (UID: \"d0eb0111-8103-4481-af5b-9507858073ef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" Feb 27 10:45:05 crc kubenswrapper[4728]: E0227 10:45:05.506223 4728 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 10:45:05 crc kubenswrapper[4728]: E0227 10:45:05.506344 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0eb0111-8103-4481-af5b-9507858073ef-cert podName:d0eb0111-8103-4481-af5b-9507858073ef nodeName:}" failed. No retries permitted until 2026-02-27 10:45:13.50631349 +0000 UTC m=+1133.468679636 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d0eb0111-8103-4481-af5b-9507858073ef-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" (UID: "d0eb0111-8103-4481-af5b-9507858073ef") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 10:45:05 crc kubenswrapper[4728]: I0227 10:45:05.915367 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:45:05 crc kubenswrapper[4728]: E0227 10:45:05.915583 4728 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 10:45:05 crc kubenswrapper[4728]: E0227 10:45:05.915664 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs podName:08f5d7df-9e0f-4c13-8799-bcb605842ffd nodeName:}" failed. No retries permitted until 2026-02-27 10:45:13.915637967 +0000 UTC m=+1133.878004113 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs") pod "openstack-operator-controller-manager-6d5879f6b9-x7nrw" (UID: "08f5d7df-9e0f-4c13-8799-bcb605842ffd") : secret "metrics-server-cert" not found Feb 27 10:45:05 crc kubenswrapper[4728]: I0227 10:45:05.916674 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-webhook-certs\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:45:05 crc kubenswrapper[4728]: E0227 10:45:05.916787 4728 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 10:45:05 crc kubenswrapper[4728]: E0227 10:45:05.916895 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-webhook-certs podName:08f5d7df-9e0f-4c13-8799-bcb605842ffd nodeName:}" failed. No retries permitted until 2026-02-27 10:45:13.91686565 +0000 UTC m=+1133.879231776 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-webhook-certs") pod "openstack-operator-controller-manager-6d5879f6b9-x7nrw" (UID: "08f5d7df-9e0f-4c13-8799-bcb605842ffd") : secret "webhook-server-cert" not found Feb 27 10:45:05 crc kubenswrapper[4728]: I0227 10:45:05.922189 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:45:05 crc kubenswrapper[4728]: I0227 10:45:05.922268 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:45:12 crc kubenswrapper[4728]: E0227 10:45:12.069517 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:3f9b0446a124745439306dc3bb7faec8c02c0b6be33f788b9d455fa57fb60120" Feb 27 10:45:12 crc kubenswrapper[4728]: E0227 10:45:12.071527 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:3f9b0446a124745439306dc3bb7faec8c02c0b6be33f788b9d455fa57fb60120,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tx9kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-6db6876945-8dzvn_openstack-operators(46e1e2f4-2677-4d4f-88c3-22c7f3942e12): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:45:12 crc kubenswrapper[4728]: E0227 10:45:12.073210 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-8dzvn" podUID="46e1e2f4-2677-4d4f-88c3-22c7f3942e12" Feb 27 10:45:12 crc kubenswrapper[4728]: E0227 10:45:12.680015 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:114c0dee0bab1d453890e9dcc7727de749055bdbea049384d5696e7ac8d78fe3" Feb 27 10:45:12 crc kubenswrapper[4728]: E0227 10:45:12.683848 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:114c0dee0bab1d453890e9dcc7727de749055bdbea049384d5696e7ac8d78fe3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pb8jh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-78bc7f9bd9-2prk8_openstack-operators(35b1713c-2089-47f7-8537-468fc7a8f79e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:45:12 crc kubenswrapper[4728]: E0227 10:45:12.685103 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2prk8" podUID="35b1713c-2089-47f7-8537-468fc7a8f79e" Feb 27 10:45:12 crc kubenswrapper[4728]: E0227 10:45:12.863836 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:114c0dee0bab1d453890e9dcc7727de749055bdbea049384d5696e7ac8d78fe3\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2prk8" podUID="35b1713c-2089-47f7-8537-468fc7a8f79e" Feb 27 10:45:12 crc kubenswrapper[4728]: E0227 10:45:12.864538 4728 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:3f9b0446a124745439306dc3bb7faec8c02c0b6be33f788b9d455fa57fb60120\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-8dzvn" podUID="46e1e2f4-2677-4d4f-88c3-22c7f3942e12" Feb 27 10:45:13 crc kubenswrapper[4728]: I0227 10:45:13.046541 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-2gxfg\" (UID: \"0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg" Feb 27 10:45:13 crc kubenswrapper[4728]: I0227 10:45:13.052700 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-2gxfg\" (UID: \"0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg" Feb 27 10:45:13 crc kubenswrapper[4728]: I0227 10:45:13.084032 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg" Feb 27 10:45:13 crc kubenswrapper[4728]: E0227 10:45:13.379915 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214" Feb 27 10:45:13 crc kubenswrapper[4728]: E0227 10:45:13.380085 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-26ppk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-5d87c9d997-l2ffb_openstack-operators(65769a2d-f0c4-4af4-b5ce-f5918e90bfbf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:45:13 crc kubenswrapper[4728]: E0227 10:45:13.381419 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-l2ffb" podUID="65769a2d-f0c4-4af4-b5ce-f5918e90bfbf" Feb 27 10:45:13 crc kubenswrapper[4728]: I0227 10:45:13.554992 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0eb0111-8103-4481-af5b-9507858073ef-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cht92p\" (UID: \"d0eb0111-8103-4481-af5b-9507858073ef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" Feb 27 10:45:13 crc kubenswrapper[4728]: I0227 10:45:13.558241 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/d0eb0111-8103-4481-af5b-9507858073ef-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cht92p\" (UID: \"d0eb0111-8103-4481-af5b-9507858073ef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" Feb 27 10:45:13 crc kubenswrapper[4728]: I0227 10:45:13.834105 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" Feb 27 10:45:13 crc kubenswrapper[4728]: E0227 10:45:13.875732 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214\\\"\"" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-l2ffb" podUID="65769a2d-f0c4-4af4-b5ce-f5918e90bfbf" Feb 27 10:45:13 crc kubenswrapper[4728]: I0227 10:45:13.963932 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:45:13 crc kubenswrapper[4728]: I0227 10:45:13.964046 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-webhook-certs\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:45:13 crc kubenswrapper[4728]: E0227 10:45:13.964123 4728 secret.go:188] Couldn't get 
secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 10:45:13 crc kubenswrapper[4728]: E0227 10:45:13.964200 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs podName:08f5d7df-9e0f-4c13-8799-bcb605842ffd nodeName:}" failed. No retries permitted until 2026-02-27 10:45:29.964182052 +0000 UTC m=+1149.926548158 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs") pod "openstack-operator-controller-manager-6d5879f6b9-x7nrw" (UID: "08f5d7df-9e0f-4c13-8799-bcb605842ffd") : secret "metrics-server-cert" not found Feb 27 10:45:13 crc kubenswrapper[4728]: I0227 10:45:13.968229 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-webhook-certs\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:45:14 crc kubenswrapper[4728]: E0227 10:45:14.925460 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3" Feb 27 10:45:14 crc kubenswrapper[4728]: E0227 10:45:14.926086 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ghvbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-55d77d7b5c-2whpj_openstack-operators(6fd71010-803b-40dc-8cba-72c0a8987b5b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:45:14 crc kubenswrapper[4728]: E0227 10:45:14.929664 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2whpj" podUID="6fd71010-803b-40dc-8cba-72c0a8987b5b" Feb 27 10:45:15 crc kubenswrapper[4728]: E0227 10:45:15.598193 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26" Feb 27 10:45:15 crc kubenswrapper[4728]: E0227 10:45:15.598484 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9c9z9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-67d996989d-ngg85_openstack-operators(9556c640-95cc-4030-a15d-eb61a6bcca3b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:45:15 crc kubenswrapper[4728]: E0227 10:45:15.599846 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-ngg85" podUID="9556c640-95cc-4030-a15d-eb61a6bcca3b" Feb 27 10:45:15 crc kubenswrapper[4728]: E0227 10:45:15.894432 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26\\\"\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-ngg85" podUID="9556c640-95cc-4030-a15d-eb61a6bcca3b" Feb 27 10:45:15 crc kubenswrapper[4728]: E0227 10:45:15.894456 4728 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2whpj" podUID="6fd71010-803b-40dc-8cba-72c0a8987b5b" Feb 27 10:45:18 crc kubenswrapper[4728]: E0227 10:45:18.636105 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7" Feb 27 10:45:18 crc kubenswrapper[4728]: E0227 10:45:18.636541 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rvnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9b9ff9f4d-4s265_openstack-operators(11dda47d-9181-4a72-a9ea-68874d9ca367): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:45:18 crc kubenswrapper[4728]: E0227 10:45:18.637767 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-4s265" podUID="11dda47d-9181-4a72-a9ea-68874d9ca367" Feb 27 10:45:18 crc kubenswrapper[4728]: E0227 10:45:18.929379 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-4s265" podUID="11dda47d-9181-4a72-a9ea-68874d9ca367" Feb 27 10:45:19 crc kubenswrapper[4728]: E0227 10:45:19.886093 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968" Feb 27 10:45:19 crc kubenswrapper[4728]: E0227 10:45:19.886372 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sjpbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-55b5ff4dbb-tq6s8_openstack-operators(e81ced4b-a6cf-4dba-964a-52f8bcbd82ae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:45:19 crc kubenswrapper[4728]: E0227 10:45:19.887609 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-tq6s8" podUID="e81ced4b-a6cf-4dba-964a-52f8bcbd82ae" Feb 27 10:45:19 crc kubenswrapper[4728]: E0227 10:45:19.941004 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968\\\"\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-tq6s8" podUID="e81ced4b-a6cf-4dba-964a-52f8bcbd82ae" Feb 27 10:45:20 crc kubenswrapper[4728]: E0227 10:45:20.382489 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4" Feb 27 10:45:20 crc kubenswrapper[4728]: E0227 10:45:20.382941 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k5hd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-54688575f-5xx5w_openstack-operators(c0bf29f3-9fc3-4d2e-b2f6-979e2ed8f3ce): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:45:20 crc kubenswrapper[4728]: E0227 10:45:20.384543 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-5xx5w" podUID="c0bf29f3-9fc3-4d2e-b2f6-979e2ed8f3ce" Feb 27 10:45:20 crc kubenswrapper[4728]: E0227 10:45:20.948143 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-5xx5w" podUID="c0bf29f3-9fc3-4d2e-b2f6-979e2ed8f3ce" Feb 27 10:45:22 crc kubenswrapper[4728]: E0227 10:45:22.085304 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:e41dfadd2c3bbcae29f8c43cd2feea6724a48cdef127d65d1d37816bb9945a01" Feb 27 10:45:22 crc kubenswrapper[4728]: E0227 10:45:22.085872 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:e41dfadd2c3bbcae29f8c43cd2feea6724a48cdef127d65d1d37816bb9945a01,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9wwh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-545456dc4-qrz29_openstack-operators(20a64934-5596-44eb-9646-115ff6b4e9c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:45:22 crc kubenswrapper[4728]: E0227 10:45:22.087602 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-qrz29" podUID="20a64934-5596-44eb-9646-115ff6b4e9c8" Feb 27 10:45:22 crc kubenswrapper[4728]: E0227 10:45:22.595129 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:71f2ab3bb41d1743287a3270dd49e32192b347d8ba7353d2250cbd7e8528219b" Feb 27 10:45:22 crc kubenswrapper[4728]: E0227 10:45:22.595372 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:71f2ab3bb41d1743287a3270dd49e32192b347d8ba7353d2250cbd7e8528219b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5sltg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-556b8b874-rxh2d_openstack-operators(9a392698-f140-4562-b72c-7cbe0a868f1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:45:22 crc kubenswrapper[4728]: E0227 10:45:22.596696 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-rxh2d" podUID="9a392698-f140-4562-b72c-7cbe0a868f1c" Feb 27 10:45:22 crc kubenswrapper[4728]: E0227 10:45:22.966486 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:e41dfadd2c3bbcae29f8c43cd2feea6724a48cdef127d65d1d37816bb9945a01\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-qrz29" podUID="20a64934-5596-44eb-9646-115ff6b4e9c8" Feb 27 10:45:22 crc kubenswrapper[4728]: E0227 10:45:22.969401 4728 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:71f2ab3bb41d1743287a3270dd49e32192b347d8ba7353d2250cbd7e8528219b\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-rxh2d" podUID="9a392698-f140-4562-b72c-7cbe0a868f1c" Feb 27 10:45:23 crc kubenswrapper[4728]: E0227 10:45:23.662378 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:12fa31d2a2dfe1a832c6a2c0eb58876a3a62595a1a1f49b13c2a1f9b6d378735" Feb 27 10:45:23 crc kubenswrapper[4728]: E0227 10:45:23.663491 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:12fa31d2a2dfe1a832c6a2c0eb58876a3a62595a1a1f49b13c2a1f9b6d378735,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b2frk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-55ffd4876b-qkrxj_openstack-operators(df32cd75-8d92-4dbe-9e96-0c943a0f2614): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:45:23 crc kubenswrapper[4728]: E0227 10:45:23.664837 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-qkrxj" podUID="df32cd75-8d92-4dbe-9e96-0c943a0f2614" Feb 27 10:45:23 crc kubenswrapper[4728]: E0227 10:45:23.974879 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:12fa31d2a2dfe1a832c6a2c0eb58876a3a62595a1a1f49b13c2a1f9b6d378735\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-qkrxj" podUID="df32cd75-8d92-4dbe-9e96-0c943a0f2614" Feb 27 10:45:24 crc kubenswrapper[4728]: I0227 10:45:24.580187 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc"] Feb 27 10:45:26 crc kubenswrapper[4728]: E0227 10:45:26.740794 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84" Feb 27 10:45:26 crc kubenswrapper[4728]: E0227 10:45:26.741340 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6dbsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-74b6b5dc96-zwlhz_openstack-operators(2343eecc-62d9-4859-9cf9-6a3ec71f4906): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:45:26 crc kubenswrapper[4728]: E0227 10:45:26.742666 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-zwlhz" podUID="2343eecc-62d9-4859-9cf9-6a3ec71f4906" Feb 27 10:45:27 crc kubenswrapper[4728]: E0227 10:45:27.002176 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84\\\"\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-zwlhz" podUID="2343eecc-62d9-4859-9cf9-6a3ec71f4906" Feb 27 10:45:28 crc kubenswrapper[4728]: E0227 10:45:28.958323 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd" Feb 27 10:45:28 crc kubenswrapper[4728]: E0227 10:45:28.959093 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fqrp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5d86c7ddb7-6f5z2_openstack-operators(8303b246-6796-4521-8c7c-95234d371456): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:45:28 crc kubenswrapper[4728]: E0227 10:45:28.960328 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6f5z2" podUID="8303b246-6796-4521-8c7c-95234d371456" Feb 27 10:45:29 crc kubenswrapper[4728]: E0227 10:45:29.027569 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.22:5001/openstack-k8s-operators/telemetry-operator:aecf86554bb8ec55563815e8bb164483ca142491" Feb 27 10:45:29 crc kubenswrapper[4728]: E0227 10:45:29.027649 4728 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.22:5001/openstack-k8s-operators/telemetry-operator:aecf86554bb8ec55563815e8bb164483ca142491" Feb 27 10:45:29 crc kubenswrapper[4728]: E0227 10:45:29.027825 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.22:5001/openstack-k8s-operators/telemetry-operator:aecf86554bb8ec55563815e8bb164483ca142491,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nc9kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7776f7b585-jhzvv_openstack-operators(4af1c14f-ff9c-4c9c-a110-4f6b462c7acd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:45:29 crc kubenswrapper[4728]: E0227 10:45:29.029018 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7776f7b585-jhzvv" podUID="4af1c14f-ff9c-4c9c-a110-4f6b462c7acd" Feb 27 10:45:29 crc kubenswrapper[4728]: E0227 10:45:29.473710 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 27 10:45:29 crc kubenswrapper[4728]: E0227 10:45:29.474209 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6z7l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-5z9sj_openstack-operators(980c029c-25d4-4063-be4d-ea30564c2120): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:45:29 crc kubenswrapper[4728]: E0227 10:45:29.477148 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5z9sj" podUID="980c029c-25d4-4063-be4d-ea30564c2120" Feb 27 10:45:29 crc kubenswrapper[4728]: I0227 10:45:29.902134 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p"] Feb 27 10:45:29 crc kubenswrapper[4728]: I0227 10:45:29.969102 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs\") pod 
\"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:45:29 crc kubenswrapper[4728]: I0227 10:45:29.980331 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08f5d7df-9e0f-4c13-8799-bcb605842ffd-metrics-certs\") pod \"openstack-operator-controller-manager-6d5879f6b9-x7nrw\" (UID: \"08f5d7df-9e0f-4c13-8799-bcb605842ffd\") " pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:45:29 crc kubenswrapper[4728]: I0227 10:45:29.983117 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg"] Feb 27 10:45:29 crc kubenswrapper[4728]: W0227 10:45:29.988298 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c1ed09d_4dfa_4cf1_b7c3_4562f3811ed9.slice/crio-b03d1b17406a6148176c86fd8690ba5fc0ffbacb43107b196ee2536cda7fbee1 WatchSource:0}: Error finding container b03d1b17406a6148176c86fd8690ba5fc0ffbacb43107b196ee2536cda7fbee1: Status 404 returned error can't find the container with id b03d1b17406a6148176c86fd8690ba5fc0ffbacb43107b196ee2536cda7fbee1 Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.022855 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-j8qfr" event={"ID":"7f038385-91be-41e0-b79c-0f6160bdf07a","Type":"ContainerStarted","Data":"cca2f82f9ecec8c6ae2d944c9a29ea0e7b1f526cabded1f8745a40f22f3382db"} Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.022941 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-j8qfr" Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.024445 4728 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2whpj" event={"ID":"6fd71010-803b-40dc-8cba-72c0a8987b5b","Type":"ContainerStarted","Data":"fa128c1de615eb56b023d00f0f0bf3e1a70f300c681fbe640a3e5830dc198a60"} Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.025446 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2whpj" Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.026440 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc" event={"ID":"90322058-3b16-4e4a-8116-7f02e4865437","Type":"ContainerStarted","Data":"63b19765d61ef3e37790b3aeddbf0afca9b520422fc33a8c0d77158817c683d6"} Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.026463 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc" event={"ID":"90322058-3b16-4e4a-8116-7f02e4865437","Type":"ContainerStarted","Data":"c54b95bc3584c574692d544b46b04ecbc367dd7d0bcc1c0f91cefecc7b1370de"} Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.027884 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-hk2dt" event={"ID":"10e9cde6-9b86-4bb8-9171-f73a3a034411","Type":"ContainerStarted","Data":"4fed427da87d2929dfca4719af80f93697771ab28dd894c071a81fa7e35cb22e"} Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.028243 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-hk2dt" Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.028952 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg" 
event={"ID":"0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9","Type":"ContainerStarted","Data":"b03d1b17406a6148176c86fd8690ba5fc0ffbacb43107b196ee2536cda7fbee1"} Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.036893 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-rrwdq" event={"ID":"4c16843a-96c9-45b3-9203-6beef6d0c61c","Type":"ContainerStarted","Data":"2cda5b54ad4946fc7aaf75870b12656c79f270c80a3bc08141e7676b15849e8e"} Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.037339 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-rrwdq" Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.044487 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-sfznl" event={"ID":"c34e67f9-f7a4-4918-85c5-9d26f0f47f83","Type":"ContainerStarted","Data":"270f810961469d395da0b7aec1724dda8f071dd46df44d48ecbb5d47f70fc02f"} Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.045464 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-sfznl" Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.051073 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-j8qfr" podStartSLOduration=8.469075735 podStartE2EDuration="34.051056642s" podCreationTimestamp="2026-02-27 10:44:56 +0000 UTC" firstStartedPulling="2026-02-27 10:44:58.594655851 +0000 UTC m=+1118.557021957" lastFinishedPulling="2026-02-27 10:45:24.176636758 +0000 UTC m=+1144.139002864" observedRunningTime="2026-02-27 10:45:30.045101088 +0000 UTC m=+1150.007467194" watchObservedRunningTime="2026-02-27 10:45:30.051056642 +0000 UTC m=+1150.013422748" Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.053417 4728 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" event={"ID":"d0eb0111-8103-4481-af5b-9507858073ef","Type":"ContainerStarted","Data":"c13e4040f1edb2ca51d92163b8f89ad3234a1b0f243712be7dd96688a03c1040"} Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.091046 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-rrwdq" podStartSLOduration=8.34525161 podStartE2EDuration="33.091027941s" podCreationTimestamp="2026-02-27 10:44:57 +0000 UTC" firstStartedPulling="2026-02-27 10:44:59.423984458 +0000 UTC m=+1119.386350564" lastFinishedPulling="2026-02-27 10:45:24.169760789 +0000 UTC m=+1144.132126895" observedRunningTime="2026-02-27 10:45:30.07792662 +0000 UTC m=+1150.040292726" watchObservedRunningTime="2026-02-27 10:45:30.091027941 +0000 UTC m=+1150.053394047" Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.126716 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-sfznl" podStartSLOduration=8.287204932 podStartE2EDuration="34.126689932s" podCreationTimestamp="2026-02-27 10:44:56 +0000 UTC" firstStartedPulling="2026-02-27 10:44:58.336426398 +0000 UTC m=+1118.298792504" lastFinishedPulling="2026-02-27 10:45:24.175911398 +0000 UTC m=+1144.138277504" observedRunningTime="2026-02-27 10:45:30.109806867 +0000 UTC m=+1150.072172983" watchObservedRunningTime="2026-02-27 10:45:30.126689932 +0000 UTC m=+1150.089056048" Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.131666 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc" podStartSLOduration=30.131643888 podStartE2EDuration="30.131643888s" podCreationTimestamp="2026-02-27 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:45:30.12225902 +0000 UTC m=+1150.084625126" watchObservedRunningTime="2026-02-27 10:45:30.131643888 +0000 UTC m=+1150.094009994" Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.149325 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-hk2dt" podStartSLOduration=8.366869714 podStartE2EDuration="33.149306933s" podCreationTimestamp="2026-02-27 10:44:57 +0000 UTC" firstStartedPulling="2026-02-27 10:44:59.394205809 +0000 UTC m=+1119.356571915" lastFinishedPulling="2026-02-27 10:45:24.176643028 +0000 UTC m=+1144.139009134" observedRunningTime="2026-02-27 10:45:30.144235035 +0000 UTC m=+1150.106601141" watchObservedRunningTime="2026-02-27 10:45:30.149306933 +0000 UTC m=+1150.111673039" Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.157668 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-6wjxw" Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.159636 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.173038 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2whpj" podStartSLOduration=2.945689766 podStartE2EDuration="34.173020196s" podCreationTimestamp="2026-02-27 10:44:56 +0000 UTC" firstStartedPulling="2026-02-27 10:44:58.476944804 +0000 UTC m=+1118.439310910" lastFinishedPulling="2026-02-27 10:45:29.704275224 +0000 UTC m=+1149.666641340" observedRunningTime="2026-02-27 10:45:30.17207143 +0000 UTC m=+1150.134437536" watchObservedRunningTime="2026-02-27 10:45:30.173020196 +0000 UTC m=+1150.135386302" Feb 27 10:45:30 crc kubenswrapper[4728]: I0227 10:45:30.781100 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw"] Feb 27 10:45:31 crc kubenswrapper[4728]: I0227 10:45:31.064623 4728 generic.go:334] "Generic (PLEG): container finished" podID="90322058-3b16-4e4a-8116-7f02e4865437" containerID="63b19765d61ef3e37790b3aeddbf0afca9b520422fc33a8c0d77158817c683d6" exitCode=0 Feb 27 10:45:31 crc kubenswrapper[4728]: I0227 10:45:31.064743 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc" event={"ID":"90322058-3b16-4e4a-8116-7f02e4865437","Type":"ContainerDied","Data":"63b19765d61ef3e37790b3aeddbf0afca9b520422fc33a8c0d77158817c683d6"} Feb 27 10:45:31 crc kubenswrapper[4728]: I0227 10:45:31.066193 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-8dzvn" event={"ID":"46e1e2f4-2677-4d4f-88c3-22c7f3942e12","Type":"ContainerStarted","Data":"661fb6dc6deb471ef4d234855b34304d353ec9594d412bb7235192ae4c74154e"} Feb 27 10:45:31 crc kubenswrapper[4728]: I0227 10:45:31.067657 4728 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-8dzvn" Feb 27 10:45:31 crc kubenswrapper[4728]: I0227 10:45:31.070654 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2prk8" event={"ID":"35b1713c-2089-47f7-8537-468fc7a8f79e","Type":"ContainerStarted","Data":"a06579dc84ee5d4661d3c7f91dbe960a52f8000814e4b2f6cfe2c9f95a6ee360"} Feb 27 10:45:31 crc kubenswrapper[4728]: I0227 10:45:31.071744 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2prk8" Feb 27 10:45:31 crc kubenswrapper[4728]: I0227 10:45:31.072954 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" event={"ID":"08f5d7df-9e0f-4c13-8799-bcb605842ffd","Type":"ContainerStarted","Data":"e316c4bd80dfcd1a3a51647e5bcc9278068ca663f628e7bc284b7983c9e98f65"} Feb 27 10:45:31 crc kubenswrapper[4728]: I0227 10:45:31.074412 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jnbr4" event={"ID":"d246350e-c4cc-4f56-948b-3515032e7645","Type":"ContainerStarted","Data":"2a73f89dac517afc4fde276ce46ae5e6300ce35d6b11aeb821d5e9144ec645c8"} Feb 27 10:45:31 crc kubenswrapper[4728]: I0227 10:45:31.075747 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jnbr4" Feb 27 10:45:31 crc kubenswrapper[4728]: I0227 10:45:31.077304 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-l2ffb" event={"ID":"65769a2d-f0c4-4af4-b5ce-f5918e90bfbf","Type":"ContainerStarted","Data":"2f6e8322ee47bb343229e9be4d626bcbca1e3bd8a16c162c693eee2ce0e5bc18"} Feb 27 10:45:31 crc kubenswrapper[4728]: I0227 
10:45:31.078726 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-l2ffb" Feb 27 10:45:31 crc kubenswrapper[4728]: I0227 10:45:31.094928 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-ngg85" event={"ID":"9556c640-95cc-4030-a15d-eb61a6bcca3b","Type":"ContainerStarted","Data":"988e4cee0410d18da9940a9c081de3a2562d838265e799e988bc4a776ccf8525"} Feb 27 10:45:31 crc kubenswrapper[4728]: I0227 10:45:31.095189 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-ngg85" Feb 27 10:45:31 crc kubenswrapper[4728]: I0227 10:45:31.157988 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-l2ffb" podStartSLOduration=3.992786941 podStartE2EDuration="35.157971763s" podCreationTimestamp="2026-02-27 10:44:56 +0000 UTC" firstStartedPulling="2026-02-27 10:44:58.514828395 +0000 UTC m=+1118.477194501" lastFinishedPulling="2026-02-27 10:45:29.680013217 +0000 UTC m=+1149.642379323" observedRunningTime="2026-02-27 10:45:31.13101126 +0000 UTC m=+1151.093377366" watchObservedRunningTime="2026-02-27 10:45:31.157971763 +0000 UTC m=+1151.120337869" Feb 27 10:45:31 crc kubenswrapper[4728]: I0227 10:45:31.159914 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-8dzvn" podStartSLOduration=3.756971524 podStartE2EDuration="35.159907395s" podCreationTimestamp="2026-02-27 10:44:56 +0000 UTC" firstStartedPulling="2026-02-27 10:44:58.277047856 +0000 UTC m=+1118.239413962" lastFinishedPulling="2026-02-27 10:45:29.679983727 +0000 UTC m=+1149.642349833" observedRunningTime="2026-02-27 10:45:31.148958654 +0000 UTC m=+1151.111324760" watchObservedRunningTime="2026-02-27 
10:45:31.159907395 +0000 UTC m=+1151.122273501" Feb 27 10:45:31 crc kubenswrapper[4728]: I0227 10:45:31.165871 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jnbr4" podStartSLOduration=4.366755706 podStartE2EDuration="34.165852439s" podCreationTimestamp="2026-02-27 10:44:57 +0000 UTC" firstStartedPulling="2026-02-27 10:44:59.661819499 +0000 UTC m=+1119.624185605" lastFinishedPulling="2026-02-27 10:45:29.460916232 +0000 UTC m=+1149.423282338" observedRunningTime="2026-02-27 10:45:31.160843012 +0000 UTC m=+1151.123209118" watchObservedRunningTime="2026-02-27 10:45:31.165852439 +0000 UTC m=+1151.128218545" Feb 27 10:45:31 crc kubenswrapper[4728]: I0227 10:45:31.171702 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2prk8" podStartSLOduration=4.167980857 podStartE2EDuration="35.171684139s" podCreationTimestamp="2026-02-27 10:44:56 +0000 UTC" firstStartedPulling="2026-02-27 10:44:58.69718374 +0000 UTC m=+1118.659549846" lastFinishedPulling="2026-02-27 10:45:29.700887022 +0000 UTC m=+1149.663253128" observedRunningTime="2026-02-27 10:45:31.171546345 +0000 UTC m=+1151.133912441" watchObservedRunningTime="2026-02-27 10:45:31.171684139 +0000 UTC m=+1151.134050245" Feb 27 10:45:31 crc kubenswrapper[4728]: I0227 10:45:31.190363 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-ngg85" podStartSLOduration=3.391041202 podStartE2EDuration="34.190347502s" podCreationTimestamp="2026-02-27 10:44:57 +0000 UTC" firstStartedPulling="2026-02-27 10:44:58.902190198 +0000 UTC m=+1118.864556304" lastFinishedPulling="2026-02-27 10:45:29.701496488 +0000 UTC m=+1149.663862604" observedRunningTime="2026-02-27 10:45:31.18772575 +0000 UTC m=+1151.150091846" watchObservedRunningTime="2026-02-27 10:45:31.190347502 
+0000 UTC m=+1151.152713608" Feb 27 10:45:32 crc kubenswrapper[4728]: I0227 10:45:32.113786 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" event={"ID":"08f5d7df-9e0f-4c13-8799-bcb605842ffd","Type":"ContainerStarted","Data":"83ba7415fcf40a09ad598307c6abe761f25ec211946a165aaebb6e78ac8bcf13"} Feb 27 10:45:32 crc kubenswrapper[4728]: I0227 10:45:32.113859 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:45:32 crc kubenswrapper[4728]: I0227 10:45:32.118883 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-4s265" event={"ID":"11dda47d-9181-4a72-a9ea-68874d9ca367","Type":"ContainerStarted","Data":"96ea2835de28e49da55f982c3361821a72490708aa5b506977d684d0ec60b10e"} Feb 27 10:45:32 crc kubenswrapper[4728]: I0227 10:45:32.155258 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" podStartSLOduration=35.155235808 podStartE2EDuration="35.155235808s" podCreationTimestamp="2026-02-27 10:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:45:32.148107362 +0000 UTC m=+1152.110473468" watchObservedRunningTime="2026-02-27 10:45:32.155235808 +0000 UTC m=+1152.117601914" Feb 27 10:45:32 crc kubenswrapper[4728]: I0227 10:45:32.165138 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-4s265" podStartSLOduration=3.447704742 podStartE2EDuration="35.16512237s" podCreationTimestamp="2026-02-27 10:44:57 +0000 UTC" firstStartedPulling="2026-02-27 10:44:59.653256194 +0000 UTC m=+1119.615622300" lastFinishedPulling="2026-02-27 
10:45:31.370673822 +0000 UTC m=+1151.333039928" observedRunningTime="2026-02-27 10:45:32.162819956 +0000 UTC m=+1152.125186062" watchObservedRunningTime="2026-02-27 10:45:32.16512237 +0000 UTC m=+1152.127488466" Feb 27 10:45:32 crc kubenswrapper[4728]: I0227 10:45:32.657161 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc" Feb 27 10:45:32 crc kubenswrapper[4728]: I0227 10:45:32.727287 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvqdv\" (UniqueName: \"kubernetes.io/projected/90322058-3b16-4e4a-8116-7f02e4865437-kube-api-access-vvqdv\") pod \"90322058-3b16-4e4a-8116-7f02e4865437\" (UID: \"90322058-3b16-4e4a-8116-7f02e4865437\") " Feb 27 10:45:32 crc kubenswrapper[4728]: I0227 10:45:32.727439 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90322058-3b16-4e4a-8116-7f02e4865437-config-volume\") pod \"90322058-3b16-4e4a-8116-7f02e4865437\" (UID: \"90322058-3b16-4e4a-8116-7f02e4865437\") " Feb 27 10:45:32 crc kubenswrapper[4728]: I0227 10:45:32.727543 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90322058-3b16-4e4a-8116-7f02e4865437-secret-volume\") pod \"90322058-3b16-4e4a-8116-7f02e4865437\" (UID: \"90322058-3b16-4e4a-8116-7f02e4865437\") " Feb 27 10:45:32 crc kubenswrapper[4728]: I0227 10:45:32.729205 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90322058-3b16-4e4a-8116-7f02e4865437-config-volume" (OuterVolumeSpecName: "config-volume") pod "90322058-3b16-4e4a-8116-7f02e4865437" (UID: "90322058-3b16-4e4a-8116-7f02e4865437"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:45:32 crc kubenswrapper[4728]: I0227 10:45:32.733757 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90322058-3b16-4e4a-8116-7f02e4865437-kube-api-access-vvqdv" (OuterVolumeSpecName: "kube-api-access-vvqdv") pod "90322058-3b16-4e4a-8116-7f02e4865437" (UID: "90322058-3b16-4e4a-8116-7f02e4865437"). InnerVolumeSpecName "kube-api-access-vvqdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:45:32 crc kubenswrapper[4728]: I0227 10:45:32.733968 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90322058-3b16-4e4a-8116-7f02e4865437-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "90322058-3b16-4e4a-8116-7f02e4865437" (UID: "90322058-3b16-4e4a-8116-7f02e4865437"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:45:32 crc kubenswrapper[4728]: I0227 10:45:32.829649 4728 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90322058-3b16-4e4a-8116-7f02e4865437-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 10:45:32 crc kubenswrapper[4728]: I0227 10:45:32.829676 4728 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90322058-3b16-4e4a-8116-7f02e4865437-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 10:45:32 crc kubenswrapper[4728]: I0227 10:45:32.829685 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvqdv\" (UniqueName: \"kubernetes.io/projected/90322058-3b16-4e4a-8116-7f02e4865437-kube-api-access-vvqdv\") on node \"crc\" DevicePath \"\"" Feb 27 10:45:33 crc kubenswrapper[4728]: I0227 10:45:33.140919 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-tq6s8" 
event={"ID":"e81ced4b-a6cf-4dba-964a-52f8bcbd82ae","Type":"ContainerStarted","Data":"577558a143069df4d340c1bb9ce135f6a947e4059a402ef55cef2c8b90d618a9"} Feb 27 10:45:33 crc kubenswrapper[4728]: I0227 10:45:33.141982 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-tq6s8" Feb 27 10:45:33 crc kubenswrapper[4728]: I0227 10:45:33.143759 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc" event={"ID":"90322058-3b16-4e4a-8116-7f02e4865437","Type":"ContainerDied","Data":"c54b95bc3584c574692d544b46b04ecbc367dd7d0bcc1c0f91cefecc7b1370de"} Feb 27 10:45:33 crc kubenswrapper[4728]: I0227 10:45:33.143810 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c54b95bc3584c574692d544b46b04ecbc367dd7d0bcc1c0f91cefecc7b1370de" Feb 27 10:45:33 crc kubenswrapper[4728]: I0227 10:45:33.143780 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc" Feb 27 10:45:33 crc kubenswrapper[4728]: I0227 10:45:33.145423 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-5xx5w" event={"ID":"c0bf29f3-9fc3-4d2e-b2f6-979e2ed8f3ce","Type":"ContainerStarted","Data":"213adf4577707990a7f53e2baeda63bc70d2f59b462b49fc95adfb386c6aaa3a"} Feb 27 10:45:33 crc kubenswrapper[4728]: I0227 10:45:33.145732 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54688575f-5xx5w" Feb 27 10:45:33 crc kubenswrapper[4728]: I0227 10:45:33.167852 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-tq6s8" podStartSLOduration=3.10182272 podStartE2EDuration="36.167831506s" podCreationTimestamp="2026-02-27 10:44:57 +0000 UTC" firstStartedPulling="2026-02-27 10:44:59.401401827 +0000 UTC m=+1119.363767933" lastFinishedPulling="2026-02-27 10:45:32.467410613 +0000 UTC m=+1152.429776719" observedRunningTime="2026-02-27 10:45:33.157483731 +0000 UTC m=+1153.119849837" watchObservedRunningTime="2026-02-27 10:45:33.167831506 +0000 UTC m=+1153.130197612" Feb 27 10:45:33 crc kubenswrapper[4728]: I0227 10:45:33.176265 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54688575f-5xx5w" podStartSLOduration=2.817189031 podStartE2EDuration="36.176250717s" podCreationTimestamp="2026-02-27 10:44:57 +0000 UTC" firstStartedPulling="2026-02-27 10:44:58.890901107 +0000 UTC m=+1118.853267213" lastFinishedPulling="2026-02-27 10:45:32.249962793 +0000 UTC m=+1152.212328899" observedRunningTime="2026-02-27 10:45:33.171209989 +0000 UTC m=+1153.133576095" watchObservedRunningTime="2026-02-27 10:45:33.176250717 +0000 UTC m=+1153.138616823" Feb 27 10:45:35 crc 
kubenswrapper[4728]: I0227 10:45:35.172951 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg" event={"ID":"0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9","Type":"ContainerStarted","Data":"2c83b895c6719499fd739b15b64bc27d36155fd5633304a4a02dc459b567ed97"} Feb 27 10:45:35 crc kubenswrapper[4728]: I0227 10:45:35.173766 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg" Feb 27 10:45:35 crc kubenswrapper[4728]: I0227 10:45:35.175651 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" event={"ID":"d0eb0111-8103-4481-af5b-9507858073ef","Type":"ContainerStarted","Data":"92a3dced32df016c3718ee0b15dca185061979ca15663b26c61e08df7458f745"} Feb 27 10:45:35 crc kubenswrapper[4728]: I0227 10:45:35.175853 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" Feb 27 10:45:35 crc kubenswrapper[4728]: I0227 10:45:35.200938 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg" podStartSLOduration=34.141728837 podStartE2EDuration="38.200922798s" podCreationTimestamp="2026-02-27 10:44:57 +0000 UTC" firstStartedPulling="2026-02-27 10:45:29.994945739 +0000 UTC m=+1149.957311845" lastFinishedPulling="2026-02-27 10:45:34.05413971 +0000 UTC m=+1154.016505806" observedRunningTime="2026-02-27 10:45:35.200461686 +0000 UTC m=+1155.162827812" watchObservedRunningTime="2026-02-27 10:45:35.200922798 +0000 UTC m=+1155.163288914" Feb 27 10:45:35 crc kubenswrapper[4728]: I0227 10:45:35.252111 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" 
podStartSLOduration=34.115096085 podStartE2EDuration="38.252094276s" podCreationTimestamp="2026-02-27 10:44:57 +0000 UTC" firstStartedPulling="2026-02-27 10:45:29.922904728 +0000 UTC m=+1149.885270834" lastFinishedPulling="2026-02-27 10:45:34.059902919 +0000 UTC m=+1154.022269025" observedRunningTime="2026-02-27 10:45:35.243802298 +0000 UTC m=+1155.206168414" watchObservedRunningTime="2026-02-27 10:45:35.252094276 +0000 UTC m=+1155.214460382" Feb 27 10:45:35 crc kubenswrapper[4728]: I0227 10:45:35.922156 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:45:35 crc kubenswrapper[4728]: I0227 10:45:35.922245 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:45:35 crc kubenswrapper[4728]: I0227 10:45:35.922327 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:45:35 crc kubenswrapper[4728]: I0227 10:45:35.923581 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25e402b0eb27122e7fbd4811edc6c8ff99dce0897b61a2efd27b0c5dbb0c9671"} pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 10:45:35 crc kubenswrapper[4728]: I0227 10:45:35.923704 4728 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" containerID="cri-o://25e402b0eb27122e7fbd4811edc6c8ff99dce0897b61a2efd27b0c5dbb0c9671" gracePeriod=600 Feb 27 10:45:36 crc kubenswrapper[4728]: I0227 10:45:36.188113 4728 generic.go:334] "Generic (PLEG): container finished" podID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerID="25e402b0eb27122e7fbd4811edc6c8ff99dce0897b61a2efd27b0c5dbb0c9671" exitCode=0 Feb 27 10:45:36 crc kubenswrapper[4728]: I0227 10:45:36.188144 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerDied","Data":"25e402b0eb27122e7fbd4811edc6c8ff99dce0897b61a2efd27b0c5dbb0c9671"} Feb 27 10:45:36 crc kubenswrapper[4728]: I0227 10:45:36.188655 4728 scope.go:117] "RemoveContainer" containerID="0c1db5be2b8f7ae48c2eb85c7a1f9d89d594ab5c8b362069a65d852dc6140374" Feb 27 10:45:37 crc kubenswrapper[4728]: I0227 10:45:37.202801 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"38e4421806f8078d8e00d718689caad66ee119d857ee6a04b69a7a968f3e70aa"} Feb 27 10:45:37 crc kubenswrapper[4728]: I0227 10:45:37.205364 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-qkrxj" event={"ID":"df32cd75-8d92-4dbe-9e96-0c943a0f2614","Type":"ContainerStarted","Data":"4723b4d5d24ad006676a69e205e67edbc051bb4c407aef10ec13abc36969a6c4"} Feb 27 10:45:37 crc kubenswrapper[4728]: I0227 10:45:37.205860 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-qkrxj" Feb 27 10:45:37 crc kubenswrapper[4728]: I0227 10:45:37.208533 4728 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-qrz29" event={"ID":"20a64934-5596-44eb-9646-115ff6b4e9c8","Type":"ContainerStarted","Data":"90ddea9deb49ad1d416fb9b4b612b9dcd45b08c34806514a8073af77a127b8c5"} Feb 27 10:45:37 crc kubenswrapper[4728]: I0227 10:45:37.209056 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-qrz29" Feb 27 10:45:37 crc kubenswrapper[4728]: I0227 10:45:37.255194 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-qkrxj" podStartSLOduration=3.281438289 podStartE2EDuration="40.255175033s" podCreationTimestamp="2026-02-27 10:44:57 +0000 UTC" firstStartedPulling="2026-02-27 10:44:59.108742299 +0000 UTC m=+1119.071108405" lastFinishedPulling="2026-02-27 10:45:36.082479003 +0000 UTC m=+1156.044845149" observedRunningTime="2026-02-27 10:45:37.245858967 +0000 UTC m=+1157.208225083" watchObservedRunningTime="2026-02-27 10:45:37.255175033 +0000 UTC m=+1157.217541149" Feb 27 10:45:37 crc kubenswrapper[4728]: I0227 10:45:37.264387 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-8dzvn" Feb 27 10:45:37 crc kubenswrapper[4728]: I0227 10:45:37.274408 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-qrz29" podStartSLOduration=2.402597019 podStartE2EDuration="40.274387411s" podCreationTimestamp="2026-02-27 10:44:57 +0000 UTC" firstStartedPulling="2026-02-27 10:44:58.689250302 +0000 UTC m=+1118.651616398" lastFinishedPulling="2026-02-27 10:45:36.561040684 +0000 UTC m=+1156.523406790" observedRunningTime="2026-02-27 10:45:37.264362996 +0000 UTC m=+1157.226729122" watchObservedRunningTime="2026-02-27 10:45:37.274387411 
+0000 UTC m=+1157.236753527" Feb 27 10:45:37 crc kubenswrapper[4728]: I0227 10:45:37.284425 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2whpj" Feb 27 10:45:37 crc kubenswrapper[4728]: I0227 10:45:37.301690 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-l2ffb" Feb 27 10:45:37 crc kubenswrapper[4728]: I0227 10:45:37.385887 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-sfznl" Feb 27 10:45:37 crc kubenswrapper[4728]: I0227 10:45:37.397643 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-j8qfr" Feb 27 10:45:37 crc kubenswrapper[4728]: I0227 10:45:37.485581 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-2prk8" Feb 27 10:45:37 crc kubenswrapper[4728]: I0227 10:45:37.709438 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-ngg85" Feb 27 10:45:37 crc kubenswrapper[4728]: I0227 10:45:37.709922 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54688575f-5xx5w" Feb 27 10:45:37 crc kubenswrapper[4728]: I0227 10:45:37.873980 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-hk2dt" Feb 27 10:45:37 crc kubenswrapper[4728]: I0227 10:45:37.997052 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-rrwdq" Feb 27 10:45:38 crc 
kubenswrapper[4728]: I0227 10:45:38.061193 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-4s265" Feb 27 10:45:38 crc kubenswrapper[4728]: I0227 10:45:38.063817 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-4s265" Feb 27 10:45:38 crc kubenswrapper[4728]: I0227 10:45:38.197117 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-tq6s8" Feb 27 10:45:38 crc kubenswrapper[4728]: I0227 10:45:38.264608 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-jnbr4" Feb 27 10:45:39 crc kubenswrapper[4728]: I0227 10:45:39.230111 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-rxh2d" event={"ID":"9a392698-f140-4562-b72c-7cbe0a868f1c","Type":"ContainerStarted","Data":"06a840a520e8bdf7a2258bb74386f81c4e559f2d1b43da90bd931f452b62bc4d"} Feb 27 10:45:39 crc kubenswrapper[4728]: I0227 10:45:39.231696 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-rxh2d" Feb 27 10:45:39 crc kubenswrapper[4728]: I0227 10:45:39.269488 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-rxh2d" podStartSLOduration=3.499702732 podStartE2EDuration="42.269457519s" podCreationTimestamp="2026-02-27 10:44:57 +0000 UTC" firstStartedPulling="2026-02-27 10:44:59.39390448 +0000 UTC m=+1119.356270586" lastFinishedPulling="2026-02-27 10:45:38.163659267 +0000 UTC m=+1158.126025373" observedRunningTime="2026-02-27 10:45:39.263098453 +0000 UTC m=+1159.225464559" watchObservedRunningTime="2026-02-27 
10:45:39.269457519 +0000 UTC m=+1159.231823665" Feb 27 10:45:40 crc kubenswrapper[4728]: I0227 10:45:40.167703 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6d5879f6b9-x7nrw" Feb 27 10:45:41 crc kubenswrapper[4728]: E0227 10:45:41.726267 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.22:5001/openstack-k8s-operators/telemetry-operator:aecf86554bb8ec55563815e8bb164483ca142491\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7776f7b585-jhzvv" podUID="4af1c14f-ff9c-4c9c-a110-4f6b462c7acd" Feb 27 10:45:41 crc kubenswrapper[4728]: E0227 10:45:41.726309 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6f5z2" podUID="8303b246-6796-4521-8c7c-95234d371456" Feb 27 10:45:42 crc kubenswrapper[4728]: I0227 10:45:42.259667 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-zwlhz" event={"ID":"2343eecc-62d9-4859-9cf9-6a3ec71f4906","Type":"ContainerStarted","Data":"65f3cd40a603be78bcd9ed6f163c8663d485e8ae743260cda569c78dfac3e987"} Feb 27 10:45:42 crc kubenswrapper[4728]: I0227 10:45:42.260177 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-zwlhz" Feb 27 10:45:42 crc kubenswrapper[4728]: I0227 10:45:42.280891 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-zwlhz" podStartSLOduration=3.159858875 
podStartE2EDuration="45.280866526s" podCreationTimestamp="2026-02-27 10:44:57 +0000 UTC" firstStartedPulling="2026-02-27 10:44:59.108886032 +0000 UTC m=+1119.071252138" lastFinishedPulling="2026-02-27 10:45:41.229893683 +0000 UTC m=+1161.192259789" observedRunningTime="2026-02-27 10:45:42.275006585 +0000 UTC m=+1162.237372691" watchObservedRunningTime="2026-02-27 10:45:42.280866526 +0000 UTC m=+1162.243232672" Feb 27 10:45:43 crc kubenswrapper[4728]: I0227 10:45:43.094462 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-2gxfg" Feb 27 10:45:43 crc kubenswrapper[4728]: I0227 10:45:43.841798 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cht92p" Feb 27 10:45:44 crc kubenswrapper[4728]: E0227 10:45:44.728002 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5z9sj" podUID="980c029c-25d4-4063-be4d-ea30564c2120" Feb 27 10:45:47 crc kubenswrapper[4728]: I0227 10:45:47.650540 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-qrz29" Feb 27 10:45:47 crc kubenswrapper[4728]: I0227 10:45:47.706168 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-qkrxj" Feb 27 10:45:47 crc kubenswrapper[4728]: I0227 10:45:47.714069 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-rxh2d" Feb 27 10:45:47 
crc kubenswrapper[4728]: I0227 10:45:47.840433 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-zwlhz" Feb 27 10:45:54 crc kubenswrapper[4728]: I0227 10:45:54.379198 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7776f7b585-jhzvv" event={"ID":"4af1c14f-ff9c-4c9c-a110-4f6b462c7acd","Type":"ContainerStarted","Data":"244ea59a233590886a8db21d1bc13f4b8f27bf573e275e5d877396a71a93a05a"} Feb 27 10:45:54 crc kubenswrapper[4728]: I0227 10:45:54.380608 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7776f7b585-jhzvv" Feb 27 10:45:54 crc kubenswrapper[4728]: I0227 10:45:54.405338 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7776f7b585-jhzvv" podStartSLOduration=3.281196512 podStartE2EDuration="57.405319274s" podCreationTimestamp="2026-02-27 10:44:57 +0000 UTC" firstStartedPulling="2026-02-27 10:44:59.665936072 +0000 UTC m=+1119.628302178" lastFinishedPulling="2026-02-27 10:45:53.790058834 +0000 UTC m=+1173.752424940" observedRunningTime="2026-02-27 10:45:54.397256712 +0000 UTC m=+1174.359622838" watchObservedRunningTime="2026-02-27 10:45:54.405319274 +0000 UTC m=+1174.367685390" Feb 27 10:45:55 crc kubenswrapper[4728]: I0227 10:45:55.392987 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6f5z2" event={"ID":"8303b246-6796-4521-8c7c-95234d371456","Type":"ContainerStarted","Data":"f73fd12391fdb49d4eb0fa4ca0ba30d763bda784a350dd4ad2ae2b8821c32edb"} Feb 27 10:45:55 crc kubenswrapper[4728]: I0227 10:45:55.393763 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6f5z2" Feb 27 10:45:55 
crc kubenswrapper[4728]: I0227 10:45:55.417911 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6f5z2" podStartSLOduration=2.884016029 podStartE2EDuration="58.4178861s" podCreationTimestamp="2026-02-27 10:44:57 +0000 UTC" firstStartedPulling="2026-02-27 10:44:59.65821854 +0000 UTC m=+1119.620584646" lastFinishedPulling="2026-02-27 10:45:55.192088611 +0000 UTC m=+1175.154454717" observedRunningTime="2026-02-27 10:45:55.416203174 +0000 UTC m=+1175.378569280" watchObservedRunningTime="2026-02-27 10:45:55.4178861 +0000 UTC m=+1175.380252206" Feb 27 10:45:59 crc kubenswrapper[4728]: I0227 10:45:59.434544 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5z9sj" event={"ID":"980c029c-25d4-4063-be4d-ea30564c2120","Type":"ContainerStarted","Data":"e593341ab0dd8eb07996fd8ab59a146ea4274fb5613613b87a1478a038b62847"} Feb 27 10:45:59 crc kubenswrapper[4728]: I0227 10:45:59.455624 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5z9sj" podStartSLOduration=2.976057901 podStartE2EDuration="1m2.455595683s" podCreationTimestamp="2026-02-27 10:44:57 +0000 UTC" firstStartedPulling="2026-02-27 10:44:59.665594442 +0000 UTC m=+1119.627960548" lastFinishedPulling="2026-02-27 10:45:59.145132194 +0000 UTC m=+1179.107498330" observedRunningTime="2026-02-27 10:45:59.454544563 +0000 UTC m=+1179.416910689" watchObservedRunningTime="2026-02-27 10:45:59.455595683 +0000 UTC m=+1179.417961789" Feb 27 10:46:00 crc kubenswrapper[4728]: I0227 10:46:00.153571 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536486-wmfdc"] Feb 27 10:46:00 crc kubenswrapper[4728]: E0227 10:46:00.153993 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90322058-3b16-4e4a-8116-7f02e4865437" 
containerName="collect-profiles" Feb 27 10:46:00 crc kubenswrapper[4728]: I0227 10:46:00.154009 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="90322058-3b16-4e4a-8116-7f02e4865437" containerName="collect-profiles" Feb 27 10:46:00 crc kubenswrapper[4728]: I0227 10:46:00.154249 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="90322058-3b16-4e4a-8116-7f02e4865437" containerName="collect-profiles" Feb 27 10:46:00 crc kubenswrapper[4728]: I0227 10:46:00.155038 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536486-wmfdc" Feb 27 10:46:00 crc kubenswrapper[4728]: I0227 10:46:00.158282 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:46:00 crc kubenswrapper[4728]: I0227 10:46:00.158314 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 10:46:00 crc kubenswrapper[4728]: I0227 10:46:00.159286 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:46:00 crc kubenswrapper[4728]: I0227 10:46:00.170349 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536486-wmfdc"] Feb 27 10:46:00 crc kubenswrapper[4728]: I0227 10:46:00.223577 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj92f\" (UniqueName: \"kubernetes.io/projected/073d11d2-49f2-497c-a676-5aa3dcb13859-kube-api-access-tj92f\") pod \"auto-csr-approver-29536486-wmfdc\" (UID: \"073d11d2-49f2-497c-a676-5aa3dcb13859\") " pod="openshift-infra/auto-csr-approver-29536486-wmfdc" Feb 27 10:46:00 crc kubenswrapper[4728]: I0227 10:46:00.325596 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj92f\" (UniqueName: 
\"kubernetes.io/projected/073d11d2-49f2-497c-a676-5aa3dcb13859-kube-api-access-tj92f\") pod \"auto-csr-approver-29536486-wmfdc\" (UID: \"073d11d2-49f2-497c-a676-5aa3dcb13859\") " pod="openshift-infra/auto-csr-approver-29536486-wmfdc" Feb 27 10:46:00 crc kubenswrapper[4728]: I0227 10:46:00.350427 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj92f\" (UniqueName: \"kubernetes.io/projected/073d11d2-49f2-497c-a676-5aa3dcb13859-kube-api-access-tj92f\") pod \"auto-csr-approver-29536486-wmfdc\" (UID: \"073d11d2-49f2-497c-a676-5aa3dcb13859\") " pod="openshift-infra/auto-csr-approver-29536486-wmfdc" Feb 27 10:46:00 crc kubenswrapper[4728]: I0227 10:46:00.483202 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536486-wmfdc" Feb 27 10:46:01 crc kubenswrapper[4728]: I0227 10:46:01.057883 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536486-wmfdc"] Feb 27 10:46:01 crc kubenswrapper[4728]: I0227 10:46:01.453104 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536486-wmfdc" event={"ID":"073d11d2-49f2-497c-a676-5aa3dcb13859","Type":"ContainerStarted","Data":"2571f309231ca20935c97814a25294557812366df4492e1d5808484dd391bc71"} Feb 27 10:46:02 crc kubenswrapper[4728]: I0227 10:46:02.471219 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536486-wmfdc" event={"ID":"073d11d2-49f2-497c-a676-5aa3dcb13859","Type":"ContainerStarted","Data":"76552ff94d7d3ab0f442fe3ba6993cf6738e1eed6e0320cd6d09dbc5a8eb995e"} Feb 27 10:46:02 crc kubenswrapper[4728]: I0227 10:46:02.502664 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536486-wmfdc" podStartSLOduration=1.5987910140000001 podStartE2EDuration="2.50264123s" podCreationTimestamp="2026-02-27 10:46:00 +0000 UTC" 
firstStartedPulling="2026-02-27 10:46:01.069274132 +0000 UTC m=+1181.031640238" lastFinishedPulling="2026-02-27 10:46:01.973124338 +0000 UTC m=+1181.935490454" observedRunningTime="2026-02-27 10:46:02.496285656 +0000 UTC m=+1182.458651782" watchObservedRunningTime="2026-02-27 10:46:02.50264123 +0000 UTC m=+1182.465007336" Feb 27 10:46:03 crc kubenswrapper[4728]: I0227 10:46:03.486226 4728 generic.go:334] "Generic (PLEG): container finished" podID="073d11d2-49f2-497c-a676-5aa3dcb13859" containerID="76552ff94d7d3ab0f442fe3ba6993cf6738e1eed6e0320cd6d09dbc5a8eb995e" exitCode=0 Feb 27 10:46:03 crc kubenswrapper[4728]: I0227 10:46:03.486318 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536486-wmfdc" event={"ID":"073d11d2-49f2-497c-a676-5aa3dcb13859","Type":"ContainerDied","Data":"76552ff94d7d3ab0f442fe3ba6993cf6738e1eed6e0320cd6d09dbc5a8eb995e"} Feb 27 10:46:04 crc kubenswrapper[4728]: I0227 10:46:04.894698 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536486-wmfdc" Feb 27 10:46:05 crc kubenswrapper[4728]: I0227 10:46:05.013309 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj92f\" (UniqueName: \"kubernetes.io/projected/073d11d2-49f2-497c-a676-5aa3dcb13859-kube-api-access-tj92f\") pod \"073d11d2-49f2-497c-a676-5aa3dcb13859\" (UID: \"073d11d2-49f2-497c-a676-5aa3dcb13859\") " Feb 27 10:46:05 crc kubenswrapper[4728]: I0227 10:46:05.019958 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/073d11d2-49f2-497c-a676-5aa3dcb13859-kube-api-access-tj92f" (OuterVolumeSpecName: "kube-api-access-tj92f") pod "073d11d2-49f2-497c-a676-5aa3dcb13859" (UID: "073d11d2-49f2-497c-a676-5aa3dcb13859"). InnerVolumeSpecName "kube-api-access-tj92f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:46:05 crc kubenswrapper[4728]: I0227 10:46:05.115768 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj92f\" (UniqueName: \"kubernetes.io/projected/073d11d2-49f2-497c-a676-5aa3dcb13859-kube-api-access-tj92f\") on node \"crc\" DevicePath \"\"" Feb 27 10:46:05 crc kubenswrapper[4728]: I0227 10:46:05.507582 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536486-wmfdc" event={"ID":"073d11d2-49f2-497c-a676-5aa3dcb13859","Type":"ContainerDied","Data":"2571f309231ca20935c97814a25294557812366df4492e1d5808484dd391bc71"} Feb 27 10:46:05 crc kubenswrapper[4728]: I0227 10:46:05.507653 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2571f309231ca20935c97814a25294557812366df4492e1d5808484dd391bc71" Feb 27 10:46:05 crc kubenswrapper[4728]: I0227 10:46:05.507685 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536486-wmfdc" Feb 27 10:46:05 crc kubenswrapper[4728]: I0227 10:46:05.596332 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536480-5pkgk"] Feb 27 10:46:05 crc kubenswrapper[4728]: I0227 10:46:05.614115 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536480-5pkgk"] Feb 27 10:46:06 crc kubenswrapper[4728]: I0227 10:46:06.740131 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6047c131-51e2-4139-b721-183ee9db08e3" path="/var/lib/kubelet/pods/6047c131-51e2-4139-b721-183ee9db08e3/volumes" Feb 27 10:46:07 crc kubenswrapper[4728]: I0227 10:46:07.846086 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6f5z2" Feb 27 10:46:08 crc kubenswrapper[4728]: I0227 10:46:08.120099 4728 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7776f7b585-jhzvv" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.625552 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-br2wj"] Feb 27 10:46:26 crc kubenswrapper[4728]: E0227 10:46:26.627514 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="073d11d2-49f2-497c-a676-5aa3dcb13859" containerName="oc" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.627599 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="073d11d2-49f2-497c-a676-5aa3dcb13859" containerName="oc" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.627823 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="073d11d2-49f2-497c-a676-5aa3dcb13859" containerName="oc" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.628750 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-br2wj" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.639270 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-6q67m" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.639702 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.639925 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.640119 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.650778 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-br2wj"] Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.740104 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-dv6pp\" (UniqueName: \"kubernetes.io/projected/18512c82-a131-4c81-ab2f-c4d812729f46-kube-api-access-dv6pp\") pod \"dnsmasq-dns-675f4bcbfc-br2wj\" (UID: \"18512c82-a131-4c81-ab2f-c4d812729f46\") " pod="openstack/dnsmasq-dns-675f4bcbfc-br2wj" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.740195 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18512c82-a131-4c81-ab2f-c4d812729f46-config\") pod \"dnsmasq-dns-675f4bcbfc-br2wj\" (UID: \"18512c82-a131-4c81-ab2f-c4d812729f46\") " pod="openstack/dnsmasq-dns-675f4bcbfc-br2wj" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.818571 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xtqcx"] Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.824452 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xtqcx" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.831782 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.834230 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xtqcx"] Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.841343 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv6pp\" (UniqueName: \"kubernetes.io/projected/18512c82-a131-4c81-ab2f-c4d812729f46-kube-api-access-dv6pp\") pod \"dnsmasq-dns-675f4bcbfc-br2wj\" (UID: \"18512c82-a131-4c81-ab2f-c4d812729f46\") " pod="openstack/dnsmasq-dns-675f4bcbfc-br2wj" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.841427 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18512c82-a131-4c81-ab2f-c4d812729f46-config\") pod \"dnsmasq-dns-675f4bcbfc-br2wj\" 
(UID: \"18512c82-a131-4c81-ab2f-c4d812729f46\") " pod="openstack/dnsmasq-dns-675f4bcbfc-br2wj" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.845884 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18512c82-a131-4c81-ab2f-c4d812729f46-config\") pod \"dnsmasq-dns-675f4bcbfc-br2wj\" (UID: \"18512c82-a131-4c81-ab2f-c4d812729f46\") " pod="openstack/dnsmasq-dns-675f4bcbfc-br2wj" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.870957 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv6pp\" (UniqueName: \"kubernetes.io/projected/18512c82-a131-4c81-ab2f-c4d812729f46-kube-api-access-dv6pp\") pod \"dnsmasq-dns-675f4bcbfc-br2wj\" (UID: \"18512c82-a131-4c81-ab2f-c4d812729f46\") " pod="openstack/dnsmasq-dns-675f4bcbfc-br2wj" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.943194 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8879fddd-543d-4f27-bf6f-c78338147058-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xtqcx\" (UID: \"8879fddd-543d-4f27-bf6f-c78338147058\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xtqcx" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.943251 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8879fddd-543d-4f27-bf6f-c78338147058-config\") pod \"dnsmasq-dns-78dd6ddcc-xtqcx\" (UID: \"8879fddd-543d-4f27-bf6f-c78338147058\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xtqcx" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.943286 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvqkd\" (UniqueName: \"kubernetes.io/projected/8879fddd-543d-4f27-bf6f-c78338147058-kube-api-access-pvqkd\") pod \"dnsmasq-dns-78dd6ddcc-xtqcx\" (UID: 
\"8879fddd-543d-4f27-bf6f-c78338147058\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xtqcx" Feb 27 10:46:26 crc kubenswrapper[4728]: I0227 10:46:26.957974 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-br2wj" Feb 27 10:46:27 crc kubenswrapper[4728]: I0227 10:46:27.045403 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8879fddd-543d-4f27-bf6f-c78338147058-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xtqcx\" (UID: \"8879fddd-543d-4f27-bf6f-c78338147058\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xtqcx" Feb 27 10:46:27 crc kubenswrapper[4728]: I0227 10:46:27.045725 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8879fddd-543d-4f27-bf6f-c78338147058-config\") pod \"dnsmasq-dns-78dd6ddcc-xtqcx\" (UID: \"8879fddd-543d-4f27-bf6f-c78338147058\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xtqcx" Feb 27 10:46:27 crc kubenswrapper[4728]: I0227 10:46:27.045762 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvqkd\" (UniqueName: \"kubernetes.io/projected/8879fddd-543d-4f27-bf6f-c78338147058-kube-api-access-pvqkd\") pod \"dnsmasq-dns-78dd6ddcc-xtqcx\" (UID: \"8879fddd-543d-4f27-bf6f-c78338147058\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xtqcx" Feb 27 10:46:27 crc kubenswrapper[4728]: I0227 10:46:27.046465 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8879fddd-543d-4f27-bf6f-c78338147058-config\") pod \"dnsmasq-dns-78dd6ddcc-xtqcx\" (UID: \"8879fddd-543d-4f27-bf6f-c78338147058\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xtqcx" Feb 27 10:46:27 crc kubenswrapper[4728]: I0227 10:46:27.046484 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/8879fddd-543d-4f27-bf6f-c78338147058-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xtqcx\" (UID: \"8879fddd-543d-4f27-bf6f-c78338147058\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xtqcx" Feb 27 10:46:27 crc kubenswrapper[4728]: I0227 10:46:27.062547 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvqkd\" (UniqueName: \"kubernetes.io/projected/8879fddd-543d-4f27-bf6f-c78338147058-kube-api-access-pvqkd\") pod \"dnsmasq-dns-78dd6ddcc-xtqcx\" (UID: \"8879fddd-543d-4f27-bf6f-c78338147058\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xtqcx" Feb 27 10:46:27 crc kubenswrapper[4728]: I0227 10:46:27.154350 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xtqcx" Feb 27 10:46:27 crc kubenswrapper[4728]: I0227 10:46:27.458403 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-br2wj"] Feb 27 10:46:27 crc kubenswrapper[4728]: W0227 10:46:27.460341 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18512c82_a131_4c81_ab2f_c4d812729f46.slice/crio-6517c27dc9054bf649862fac038c04f398b45102964935266251633bfe82b3ae WatchSource:0}: Error finding container 6517c27dc9054bf649862fac038c04f398b45102964935266251633bfe82b3ae: Status 404 returned error can't find the container with id 6517c27dc9054bf649862fac038c04f398b45102964935266251633bfe82b3ae Feb 27 10:46:27 crc kubenswrapper[4728]: I0227 10:46:27.462313 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 10:46:27 crc kubenswrapper[4728]: I0227 10:46:27.619434 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xtqcx"] Feb 27 10:46:27 crc kubenswrapper[4728]: W0227 10:46:27.620623 4728 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8879fddd_543d_4f27_bf6f_c78338147058.slice/crio-2bb6b1e09737a7af7d8d2ef205e1d3238e3f325a969c0bcc50179e7cfe60b439 WatchSource:0}: Error finding container 2bb6b1e09737a7af7d8d2ef205e1d3238e3f325a969c0bcc50179e7cfe60b439: Status 404 returned error can't find the container with id 2bb6b1e09737a7af7d8d2ef205e1d3238e3f325a969c0bcc50179e7cfe60b439 Feb 27 10:46:27 crc kubenswrapper[4728]: I0227 10:46:27.751770 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-xtqcx" event={"ID":"8879fddd-543d-4f27-bf6f-c78338147058","Type":"ContainerStarted","Data":"2bb6b1e09737a7af7d8d2ef205e1d3238e3f325a969c0bcc50179e7cfe60b439"} Feb 27 10:46:27 crc kubenswrapper[4728]: I0227 10:46:27.755915 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-br2wj" event={"ID":"18512c82-a131-4c81-ab2f-c4d812729f46","Type":"ContainerStarted","Data":"6517c27dc9054bf649862fac038c04f398b45102964935266251633bfe82b3ae"} Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.431689 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-br2wj"] Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.461128 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-zpz7d"] Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.463075 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.472204 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-zpz7d"] Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.490651 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8brl\" (UniqueName: \"kubernetes.io/projected/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0-kube-api-access-m8brl\") pod \"dnsmasq-dns-5ccc8479f9-zpz7d\" (UID: \"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.490801 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0-config\") pod \"dnsmasq-dns-5ccc8479f9-zpz7d\" (UID: \"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.490824 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-zpz7d\" (UID: \"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.596172 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0-config\") pod \"dnsmasq-dns-5ccc8479f9-zpz7d\" (UID: \"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.596214 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-zpz7d\" (UID: \"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.596270 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8brl\" (UniqueName: \"kubernetes.io/projected/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0-kube-api-access-m8brl\") pod \"dnsmasq-dns-5ccc8479f9-zpz7d\" (UID: \"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.597338 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0-config\") pod \"dnsmasq-dns-5ccc8479f9-zpz7d\" (UID: \"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.597587 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-zpz7d\" (UID: \"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.626620 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8brl\" (UniqueName: \"kubernetes.io/projected/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0-kube-api-access-m8brl\") pod \"dnsmasq-dns-5ccc8479f9-zpz7d\" (UID: \"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.723668 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xtqcx"] Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.782129 4728 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jvkhg"] Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.783521 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.784771 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.807559 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jvkhg"] Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.908845 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd76g\" (UniqueName: \"kubernetes.io/projected/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0-kube-api-access-fd76g\") pod \"dnsmasq-dns-57d769cc4f-jvkhg\" (UID: \"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0\") " pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.909229 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0-config\") pod \"dnsmasq-dns-57d769cc4f-jvkhg\" (UID: \"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0\") " pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" Feb 27 10:46:29 crc kubenswrapper[4728]: I0227 10:46:29.909280 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jvkhg\" (UID: \"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0\") " pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.010647 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd76g\" (UniqueName: 
\"kubernetes.io/projected/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0-kube-api-access-fd76g\") pod \"dnsmasq-dns-57d769cc4f-jvkhg\" (UID: \"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0\") " pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.010695 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0-config\") pod \"dnsmasq-dns-57d769cc4f-jvkhg\" (UID: \"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0\") " pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.010719 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jvkhg\" (UID: \"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0\") " pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.011535 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jvkhg\" (UID: \"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0\") " pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.012534 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0-config\") pod \"dnsmasq-dns-57d769cc4f-jvkhg\" (UID: \"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0\") " pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.031971 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd76g\" (UniqueName: \"kubernetes.io/projected/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0-kube-api-access-fd76g\") pod \"dnsmasq-dns-57d769cc4f-jvkhg\" 
(UID: \"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0\") " pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.133541 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.385892 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-zpz7d"] Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.629198 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.647795 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.647914 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.651820 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.652065 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.652194 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.652311 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.652434 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.652894 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-6zc2k" Feb 27 10:46:30 
crc kubenswrapper[4728]: I0227 10:46:30.654624 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.656447 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jvkhg"] Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.729004 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26ecfb63-8476-497d-9cb3-3729c4961b4e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.729059 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26ecfb63-8476-497d-9cb3-3729c4961b4e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.729080 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.729127 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.729241 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26ecfb63-8476-497d-9cb3-3729c4961b4e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.729357 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26ecfb63-8476-497d-9cb3-3729c4961b4e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.729421 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.729772 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26ecfb63-8476-497d-9cb3-3729c4961b4e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.729828 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc 
kubenswrapper[4728]: I0227 10:46:30.729862 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlhqr\" (UniqueName: \"kubernetes.io/projected/26ecfb63-8476-497d-9cb3-3729c4961b4e-kube-api-access-xlhqr\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.729971 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.792152 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" event={"ID":"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0","Type":"ContainerStarted","Data":"7de452135186dd165ad0cf00c04ac3d340b9651ae13fea7b8f3e50e76f13b0ff"} Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.793228 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" event={"ID":"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0","Type":"ContainerStarted","Data":"cfe4282ce681b6289fa1a5afcc6902675e4506ce82742702572bcecc2dfccf17"} Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.831076 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26ecfb63-8476-497d-9cb3-3729c4961b4e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.831330 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/26ecfb63-8476-497d-9cb3-3729c4961b4e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.831351 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.831393 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.831409 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26ecfb63-8476-497d-9cb3-3729c4961b4e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.831438 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26ecfb63-8476-497d-9cb3-3729c4961b4e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.831460 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.831571 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26ecfb63-8476-497d-9cb3-3729c4961b4e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.831600 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.831618 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlhqr\" (UniqueName: \"kubernetes.io/projected/26ecfb63-8476-497d-9cb3-3729c4961b4e-kube-api-access-xlhqr\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.831645 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.834604 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26ecfb63-8476-497d-9cb3-3729c4961b4e-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.835177 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.835297 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26ecfb63-8476-497d-9cb3-3729c4961b4e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.838074 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26ecfb63-8476-497d-9cb3-3729c4961b4e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.840199 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.840836 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26ecfb63-8476-497d-9cb3-3729c4961b4e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 
10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.840953 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26ecfb63-8476-497d-9cb3-3729c4961b4e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.843644 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.843698 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.843746 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/323f8e7c36144bd439e3d750bd883408bca281a20485637d7338f56eabb24e88/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.851845 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlhqr\" (UniqueName: \"kubernetes.io/projected/26ecfb63-8476-497d-9cb3-3729c4961b4e-kube-api-access-xlhqr\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.853645 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.871183 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\") pod \"rabbitmq-cell1-server-0\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.970251 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 10:46:30 crc kubenswrapper[4728]: I0227 10:46:30.973916 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.007842 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.008134 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.008350 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.008461 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.008959 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.009103 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.009189 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.009336 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-dkpkz" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.034585 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.036185 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.060288 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.061918 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.082486 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.095149 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.124552 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.141316 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5948716b-2c2b-4a90-b4b5-f8daad17f020-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.141625 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5948716b-2c2b-4a90-b4b5-f8daad17f020-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.141674 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.141694 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.141724 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5948716b-2c2b-4a90-b4b5-f8daad17f020-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.141764 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.141789 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5948716b-2c2b-4a90-b4b5-f8daad17f020-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.141807 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5948716b-2c2b-4a90-b4b5-f8daad17f020-config-data\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.141830 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") 
" pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.141864 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.141925 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j76f9\" (UniqueName: \"kubernetes.io/projected/5948716b-2c2b-4a90-b4b5-f8daad17f020-kube-api-access-j76f9\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.243004 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.243057 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad00da50-2e05-4612-a862-5cccd698e77b-config-data\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.243075 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad00da50-2e05-4612-a862-5cccd698e77b-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 
crc kubenswrapper[4728]: I0227 10:46:31.243095 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j76f9\" (UniqueName: \"kubernetes.io/projected/5948716b-2c2b-4a90-b4b5-f8daad17f020-kube-api-access-j76f9\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.243115 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d96ab6cd-ed9d-4924-9566-91930411701d-pod-info\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.243133 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.243151 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d96ab6cd-ed9d-4924-9566-91930411701d-config-data\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.243271 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqc66\" (UniqueName: \"kubernetes.io/projected/ad00da50-2e05-4612-a862-5cccd698e77b-kube-api-access-xqc66\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.243390 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d96ab6cd-ed9d-4924-9566-91930411701d-server-conf\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.243421 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5948716b-2c2b-4a90-b4b5-f8daad17f020-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.243436 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5948716b-2c2b-4a90-b4b5-f8daad17f020-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.243456 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.243548 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d96ab6cd-ed9d-4924-9566-91930411701d-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.244356 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.244417 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.244442 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.244482 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cgps\" (UniqueName: \"kubernetes.io/projected/d96ab6cd-ed9d-4924-9566-91930411701d-kube-api-access-2cgps\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.244538 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.244556 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/ad00da50-2e05-4612-a862-5cccd698e77b-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.244595 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5948716b-2c2b-4a90-b4b5-f8daad17f020-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.244620 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.244652 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad00da50-2e05-4612-a862-5cccd698e77b-pod-info\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.244683 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad00da50-2e05-4612-a862-5cccd698e77b-server-conf\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.244706 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d96ab6cd-ed9d-4924-9566-91930411701d-erlang-cookie-secret\") pod 
\"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.244745 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b8de2248-9600-4d63-9c2b-b5303351b265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8de2248-9600-4d63-9c2b-b5303351b265\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.245108 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.245128 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.245152 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.245198 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5948716b-2c2b-4a90-b4b5-f8daad17f020-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.245219 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5948716b-2c2b-4a90-b4b5-f8daad17f020-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.245246 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.245273 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.245311 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.245314 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5948716b-2c2b-4a90-b4b5-f8daad17f020-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 
10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.245945 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.246157 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5948716b-2c2b-4a90-b4b5-f8daad17f020-config-data\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.248986 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5948716b-2c2b-4a90-b4b5-f8daad17f020-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.249128 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.249259 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.250270 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.250321 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.250342 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cf7e939e9da33831c806ab7477c702db97a95da5329aaac1969ecd83a668f3c0/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.250434 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5948716b-2c2b-4a90-b4b5-f8daad17f020-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.258983 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j76f9\" (UniqueName: \"kubernetes.io/projected/5948716b-2c2b-4a90-b4b5-f8daad17f020-kube-api-access-j76f9\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.259892 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5948716b-2c2b-4a90-b4b5-f8daad17f020-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.284484 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\") pod \"rabbitmq-server-0\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.335863 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.347966 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cgps\" (UniqueName: \"kubernetes.io/projected/d96ab6cd-ed9d-4924-9566-91930411701d-kube-api-access-2cgps\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348015 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348034 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad00da50-2e05-4612-a862-5cccd698e77b-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348062 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348083 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad00da50-2e05-4612-a862-5cccd698e77b-pod-info\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348103 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad00da50-2e05-4612-a862-5cccd698e77b-server-conf\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348119 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d96ab6cd-ed9d-4924-9566-91930411701d-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348141 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b8de2248-9600-4d63-9c2b-b5303351b265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8de2248-9600-4d63-9c2b-b5303351b265\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348160 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: 
\"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348178 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348206 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348248 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348279 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad00da50-2e05-4612-a862-5cccd698e77b-config-data\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348294 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad00da50-2e05-4612-a862-5cccd698e77b-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: 
I0227 10:46:31.348315 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d96ab6cd-ed9d-4924-9566-91930411701d-pod-info\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348332 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348348 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d96ab6cd-ed9d-4924-9566-91930411701d-config-data\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348367 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqc66\" (UniqueName: \"kubernetes.io/projected/ad00da50-2e05-4612-a862-5cccd698e77b-kube-api-access-xqc66\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348389 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d96ab6cd-ed9d-4924-9566-91930411701d-server-conf\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348406 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348429 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d96ab6cd-ed9d-4924-9566-91930411701d-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.348450 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.349081 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.351599 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad00da50-2e05-4612-a862-5cccd698e77b-config-data\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.352011 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " 
pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.352173 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad00da50-2e05-4612-a862-5cccd698e77b-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.351617 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d96ab6cd-ed9d-4924-9566-91930411701d-config-data\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.352428 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.352653 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d96ab6cd-ed9d-4924-9566-91930411701d-server-conf\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.353635 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.353703 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/ad00da50-2e05-4612-a862-5cccd698e77b-server-conf\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.353734 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad00da50-2e05-4612-a862-5cccd698e77b-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.353886 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d96ab6cd-ed9d-4924-9566-91930411701d-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.354646 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d96ab6cd-ed9d-4924-9566-91930411701d-pod-info\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.356907 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.356958 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.356988 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/425f9233c08c849daa276787437e4d0b866a73669054d95a34efdd19e1b082af/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.357838 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.357988 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b8de2248-9600-4d63-9c2b-b5303351b265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8de2248-9600-4d63-9c2b-b5303351b265\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a1d6ea4b058ccfdf797f27c81301cba645230fa01a1deef34493c9fa0069be7c/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.359048 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.359929 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: 
\"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.360321 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d96ab6cd-ed9d-4924-9566-91930411701d-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.360812 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.367183 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cgps\" (UniqueName: \"kubernetes.io/projected/d96ab6cd-ed9d-4924-9566-91930411701d-kube-api-access-2cgps\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.370958 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqc66\" (UniqueName: \"kubernetes.io/projected/ad00da50-2e05-4612-a862-5cccd698e77b-kube-api-access-xqc66\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.371113 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad00da50-2e05-4612-a862-5cccd698e77b-pod-info\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.403671 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\") pod \"rabbitmq-server-2\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.411490 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b8de2248-9600-4d63-9c2b-b5303351b265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8de2248-9600-4d63-9c2b-b5303351b265\") pod \"rabbitmq-server-1\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " pod="openstack/rabbitmq-server-1" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.690726 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 27 10:46:31 crc kubenswrapper[4728]: I0227 10:46:31.706062 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.155679 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.157354 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.159420 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-hxhsh" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.159485 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.159739 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.159826 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.165204 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.168610 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.191716 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/803ed01f-b95c-4718-a5e8-3a864b0b7850-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.191778 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/803ed01f-b95c-4718-a5e8-3a864b0b7850-config-data-default\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.191809 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/803ed01f-b95c-4718-a5e8-3a864b0b7850-operator-scripts\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.191849 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/803ed01f-b95c-4718-a5e8-3a864b0b7850-kolla-config\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.191903 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/803ed01f-b95c-4718-a5e8-3a864b0b7850-config-data-generated\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.191945 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c97ba688-2540-4995-8629-6194147bddb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c97ba688-2540-4995-8629-6194147bddb4\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.191973 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803ed01f-b95c-4718-a5e8-3a864b0b7850-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.192037 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-fnxdx\" (UniqueName: \"kubernetes.io/projected/803ed01f-b95c-4718-a5e8-3a864b0b7850-kube-api-access-fnxdx\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.296183 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/803ed01f-b95c-4718-a5e8-3a864b0b7850-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.296259 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/803ed01f-b95c-4718-a5e8-3a864b0b7850-config-data-default\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.296286 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/803ed01f-b95c-4718-a5e8-3a864b0b7850-operator-scripts\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.296331 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/803ed01f-b95c-4718-a5e8-3a864b0b7850-kolla-config\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.296381 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/803ed01f-b95c-4718-a5e8-3a864b0b7850-config-data-generated\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.296431 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c97ba688-2540-4995-8629-6194147bddb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c97ba688-2540-4995-8629-6194147bddb4\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.296459 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803ed01f-b95c-4718-a5e8-3a864b0b7850-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.296589 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnxdx\" (UniqueName: \"kubernetes.io/projected/803ed01f-b95c-4718-a5e8-3a864b0b7850-kube-api-access-fnxdx\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.296856 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/803ed01f-b95c-4718-a5e8-3a864b0b7850-config-data-generated\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.297217 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/803ed01f-b95c-4718-a5e8-3a864b0b7850-kolla-config\") pod 
\"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.297927 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/803ed01f-b95c-4718-a5e8-3a864b0b7850-config-data-default\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.298312 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/803ed01f-b95c-4718-a5e8-3a864b0b7850-operator-scripts\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.300095 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.300138 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c97ba688-2540-4995-8629-6194147bddb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c97ba688-2540-4995-8629-6194147bddb4\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/444eeb3ff7dede97c7610825d4a76538d33b74baed92e2b4e16e9c6d224fb84a/globalmount\"" pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.300598 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/803ed01f-b95c-4718-a5e8-3a864b0b7850-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.303085 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803ed01f-b95c-4718-a5e8-3a864b0b7850-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.332873 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnxdx\" (UniqueName: \"kubernetes.io/projected/803ed01f-b95c-4718-a5e8-3a864b0b7850-kube-api-access-fnxdx\") pod \"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.369666 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c97ba688-2540-4995-8629-6194147bddb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c97ba688-2540-4995-8629-6194147bddb4\") pod 
\"openstack-galera-0\" (UID: \"803ed01f-b95c-4718-a5e8-3a864b0b7850\") " pod="openstack/openstack-galera-0" Feb 27 10:46:32 crc kubenswrapper[4728]: I0227 10:46:32.508696 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.406371 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.408210 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.413011 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.413287 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-n4znk" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.413748 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.416060 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.420095 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.521424 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-832ad2b8-aa54-4df8-ae6f-27b2feebd5f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-832ad2b8-aa54-4df8-ae6f-27b2feebd5f6\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.521521 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.521558 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.521588 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92wkk\" (UniqueName: \"kubernetes.io/projected/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-kube-api-access-92wkk\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.521647 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.521669 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc 
kubenswrapper[4728]: I0227 10:46:33.521740 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.521904 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.623928 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-832ad2b8-aa54-4df8-ae6f-27b2feebd5f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-832ad2b8-aa54-4df8-ae6f-27b2feebd5f6\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.624023 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.624064 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 
10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.624093 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92wkk\" (UniqueName: \"kubernetes.io/projected/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-kube-api-access-92wkk\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.624128 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.624146 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.624165 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.624207 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.624599 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.625544 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.626558 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.629994 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.632612 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.634595 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.635094 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.646548 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-832ad2b8-aa54-4df8-ae6f-27b2feebd5f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-832ad2b8-aa54-4df8-ae6f-27b2feebd5f6\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/af1637cb76885e76145402fec52e2e36822784f8defbe0833451d92392f6b0c8/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.654888 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92wkk\" (UniqueName: \"kubernetes.io/projected/a7b93ac4-55f2-4491-b4b4-f8abfd837dfa-kube-api-access-92wkk\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.704979 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-832ad2b8-aa54-4df8-ae6f-27b2feebd5f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-832ad2b8-aa54-4df8-ae6f-27b2feebd5f6\") pod \"openstack-cell1-galera-0\" (UID: \"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa\") " pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.734594 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.768697 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.770804 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.773056 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kbtkr" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.775986 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.777406 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.780872 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.929666 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7p4t\" (UniqueName: \"kubernetes.io/projected/6e834d11-1d93-42ba-8dfe-f17c9faddff2-kube-api-access-g7p4t\") pod \"memcached-0\" (UID: \"6e834d11-1d93-42ba-8dfe-f17c9faddff2\") " pod="openstack/memcached-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.929775 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e834d11-1d93-42ba-8dfe-f17c9faddff2-config-data\") pod \"memcached-0\" (UID: \"6e834d11-1d93-42ba-8dfe-f17c9faddff2\") " pod="openstack/memcached-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.929867 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/6e834d11-1d93-42ba-8dfe-f17c9faddff2-kolla-config\") pod \"memcached-0\" (UID: \"6e834d11-1d93-42ba-8dfe-f17c9faddff2\") " pod="openstack/memcached-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.929982 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e834d11-1d93-42ba-8dfe-f17c9faddff2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6e834d11-1d93-42ba-8dfe-f17c9faddff2\") " pod="openstack/memcached-0" Feb 27 10:46:33 crc kubenswrapper[4728]: I0227 10:46:33.930107 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e834d11-1d93-42ba-8dfe-f17c9faddff2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6e834d11-1d93-42ba-8dfe-f17c9faddff2\") " pod="openstack/memcached-0" Feb 27 10:46:34 crc kubenswrapper[4728]: I0227 10:46:34.032517 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6e834d11-1d93-42ba-8dfe-f17c9faddff2-kolla-config\") pod \"memcached-0\" (UID: \"6e834d11-1d93-42ba-8dfe-f17c9faddff2\") " pod="openstack/memcached-0" Feb 27 10:46:34 crc kubenswrapper[4728]: I0227 10:46:34.032629 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e834d11-1d93-42ba-8dfe-f17c9faddff2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6e834d11-1d93-42ba-8dfe-f17c9faddff2\") " pod="openstack/memcached-0" Feb 27 10:46:34 crc kubenswrapper[4728]: I0227 10:46:34.032734 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e834d11-1d93-42ba-8dfe-f17c9faddff2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6e834d11-1d93-42ba-8dfe-f17c9faddff2\") " 
pod="openstack/memcached-0" Feb 27 10:46:34 crc kubenswrapper[4728]: I0227 10:46:34.032811 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7p4t\" (UniqueName: \"kubernetes.io/projected/6e834d11-1d93-42ba-8dfe-f17c9faddff2-kube-api-access-g7p4t\") pod \"memcached-0\" (UID: \"6e834d11-1d93-42ba-8dfe-f17c9faddff2\") " pod="openstack/memcached-0" Feb 27 10:46:34 crc kubenswrapper[4728]: I0227 10:46:34.032853 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e834d11-1d93-42ba-8dfe-f17c9faddff2-config-data\") pod \"memcached-0\" (UID: \"6e834d11-1d93-42ba-8dfe-f17c9faddff2\") " pod="openstack/memcached-0" Feb 27 10:46:34 crc kubenswrapper[4728]: I0227 10:46:34.033199 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6e834d11-1d93-42ba-8dfe-f17c9faddff2-kolla-config\") pod \"memcached-0\" (UID: \"6e834d11-1d93-42ba-8dfe-f17c9faddff2\") " pod="openstack/memcached-0" Feb 27 10:46:34 crc kubenswrapper[4728]: I0227 10:46:34.033713 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e834d11-1d93-42ba-8dfe-f17c9faddff2-config-data\") pod \"memcached-0\" (UID: \"6e834d11-1d93-42ba-8dfe-f17c9faddff2\") " pod="openstack/memcached-0" Feb 27 10:46:34 crc kubenswrapper[4728]: I0227 10:46:34.037570 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e834d11-1d93-42ba-8dfe-f17c9faddff2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6e834d11-1d93-42ba-8dfe-f17c9faddff2\") " pod="openstack/memcached-0" Feb 27 10:46:34 crc kubenswrapper[4728]: I0227 10:46:34.052707 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7p4t\" (UniqueName: 
\"kubernetes.io/projected/6e834d11-1d93-42ba-8dfe-f17c9faddff2-kube-api-access-g7p4t\") pod \"memcached-0\" (UID: \"6e834d11-1d93-42ba-8dfe-f17c9faddff2\") " pod="openstack/memcached-0" Feb 27 10:46:34 crc kubenswrapper[4728]: I0227 10:46:34.052890 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e834d11-1d93-42ba-8dfe-f17c9faddff2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6e834d11-1d93-42ba-8dfe-f17c9faddff2\") " pod="openstack/memcached-0" Feb 27 10:46:34 crc kubenswrapper[4728]: I0227 10:46:34.090883 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 27 10:46:35 crc kubenswrapper[4728]: I0227 10:46:35.968732 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 10:46:35 crc kubenswrapper[4728]: I0227 10:46:35.970617 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 10:46:36 crc kubenswrapper[4728]: I0227 10:46:36.009598 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-ptxvx" Feb 27 10:46:36 crc kubenswrapper[4728]: I0227 10:46:36.026211 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 10:46:36 crc kubenswrapper[4728]: I0227 10:46:36.069284 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsc5n\" (UniqueName: \"kubernetes.io/projected/ed5c2715-a8a7-4d10-ba69-32133e2b6e51-kube-api-access-tsc5n\") pod \"kube-state-metrics-0\" (UID: \"ed5c2715-a8a7-4d10-ba69-32133e2b6e51\") " pod="openstack/kube-state-metrics-0" Feb 27 10:46:36 crc kubenswrapper[4728]: I0227 10:46:36.171037 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsc5n\" (UniqueName: 
\"kubernetes.io/projected/ed5c2715-a8a7-4d10-ba69-32133e2b6e51-kube-api-access-tsc5n\") pod \"kube-state-metrics-0\" (UID: \"ed5c2715-a8a7-4d10-ba69-32133e2b6e51\") " pod="openstack/kube-state-metrics-0" Feb 27 10:46:36 crc kubenswrapper[4728]: I0227 10:46:36.216517 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsc5n\" (UniqueName: \"kubernetes.io/projected/ed5c2715-a8a7-4d10-ba69-32133e2b6e51-kube-api-access-tsc5n\") pod \"kube-state-metrics-0\" (UID: \"ed5c2715-a8a7-4d10-ba69-32133e2b6e51\") " pod="openstack/kube-state-metrics-0" Feb 27 10:46:36 crc kubenswrapper[4728]: I0227 10:46:36.333955 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 10:46:36 crc kubenswrapper[4728]: I0227 10:46:36.787880 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-jc2j2"] Feb 27 10:46:36 crc kubenswrapper[4728]: I0227 10:46:36.791975 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-jc2j2" Feb 27 10:46:36 crc kubenswrapper[4728]: I0227 10:46:36.794933 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 27 10:46:36 crc kubenswrapper[4728]: I0227 10:46:36.799560 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-rh4hh" Feb 27 10:46:36 crc kubenswrapper[4728]: I0227 10:46:36.810939 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-jc2j2"] Feb 27 10:46:36 crc kubenswrapper[4728]: I0227 10:46:36.890449 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjwl9\" (UniqueName: \"kubernetes.io/projected/fec191d1-b76f-4b8c-94c2-2d217a21951c-kube-api-access-tjwl9\") pod \"observability-ui-dashboards-66cbf594b5-jc2j2\" (UID: \"fec191d1-b76f-4b8c-94c2-2d217a21951c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-jc2j2" Feb 27 10:46:36 crc kubenswrapper[4728]: I0227 10:46:36.890512 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fec191d1-b76f-4b8c-94c2-2d217a21951c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-jc2j2\" (UID: \"fec191d1-b76f-4b8c-94c2-2d217a21951c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-jc2j2" Feb 27 10:46:36 crc kubenswrapper[4728]: I0227 10:46:36.992671 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjwl9\" (UniqueName: \"kubernetes.io/projected/fec191d1-b76f-4b8c-94c2-2d217a21951c-kube-api-access-tjwl9\") pod \"observability-ui-dashboards-66cbf594b5-jc2j2\" (UID: \"fec191d1-b76f-4b8c-94c2-2d217a21951c\") " 
pod="openshift-operators/observability-ui-dashboards-66cbf594b5-jc2j2" Feb 27 10:46:36 crc kubenswrapper[4728]: I0227 10:46:36.992737 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fec191d1-b76f-4b8c-94c2-2d217a21951c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-jc2j2\" (UID: \"fec191d1-b76f-4b8c-94c2-2d217a21951c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-jc2j2" Feb 27 10:46:36 crc kubenswrapper[4728]: E0227 10:46:36.992945 4728 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Feb 27 10:46:36 crc kubenswrapper[4728]: E0227 10:46:36.993007 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fec191d1-b76f-4b8c-94c2-2d217a21951c-serving-cert podName:fec191d1-b76f-4b8c-94c2-2d217a21951c nodeName:}" failed. No retries permitted until 2026-02-27 10:46:37.492985507 +0000 UTC m=+1217.455351613 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/fec191d1-b76f-4b8c-94c2-2d217a21951c-serving-cert") pod "observability-ui-dashboards-66cbf594b5-jc2j2" (UID: "fec191d1-b76f-4b8c-94c2-2d217a21951c") : secret "observability-ui-dashboards" not found Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.046680 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjwl9\" (UniqueName: \"kubernetes.io/projected/fec191d1-b76f-4b8c-94c2-2d217a21951c-kube-api-access-tjwl9\") pod \"observability-ui-dashboards-66cbf594b5-jc2j2\" (UID: \"fec191d1-b76f-4b8c-94c2-2d217a21951c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-jc2j2" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.126694 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-78c47f664f-wpqzf"] Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.134625 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.155819 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-78c47f664f-wpqzf"] Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.228591 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80aafb5f-a486-4afe-8402-a592e62b95d8-console-config\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.228677 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzzlg\" (UniqueName: \"kubernetes.io/projected/80aafb5f-a486-4afe-8402-a592e62b95d8-kube-api-access-gzzlg\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.228738 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80aafb5f-a486-4afe-8402-a592e62b95d8-trusted-ca-bundle\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.228776 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80aafb5f-a486-4afe-8402-a592e62b95d8-console-serving-cert\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.228832 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80aafb5f-a486-4afe-8402-a592e62b95d8-service-ca\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.228852 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80aafb5f-a486-4afe-8402-a592e62b95d8-console-oauth-config\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.228902 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80aafb5f-a486-4afe-8402-a592e62b95d8-oauth-serving-cert\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.313372 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.316297 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.323226 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.323254 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.323296 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.323533 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-bzqqf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.323608 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.323578 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.324157 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.329093 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.330145 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80aafb5f-a486-4afe-8402-a592e62b95d8-trusted-ca-bundle\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 
10:46:37.330197 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80aafb5f-a486-4afe-8402-a592e62b95d8-console-serving-cert\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.330252 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80aafb5f-a486-4afe-8402-a592e62b95d8-service-ca\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.330273 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80aafb5f-a486-4afe-8402-a592e62b95d8-console-oauth-config\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.330300 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80aafb5f-a486-4afe-8402-a592e62b95d8-oauth-serving-cert\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.330348 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80aafb5f-a486-4afe-8402-a592e62b95d8-console-config\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.330400 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzzlg\" (UniqueName: \"kubernetes.io/projected/80aafb5f-a486-4afe-8402-a592e62b95d8-kube-api-access-gzzlg\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.331190 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80aafb5f-a486-4afe-8402-a592e62b95d8-trusted-ca-bundle\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.331200 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80aafb5f-a486-4afe-8402-a592e62b95d8-service-ca\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.331272 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80aafb5f-a486-4afe-8402-a592e62b95d8-oauth-serving-cert\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.331721 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80aafb5f-a486-4afe-8402-a592e62b95d8-console-config\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.332900 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/prometheus-metric-storage-0"] Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.336043 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80aafb5f-a486-4afe-8402-a592e62b95d8-console-serving-cert\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.336973 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80aafb5f-a486-4afe-8402-a592e62b95d8-console-oauth-config\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.348603 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzzlg\" (UniqueName: \"kubernetes.io/projected/80aafb5f-a486-4afe-8402-a592e62b95d8-kube-api-access-gzzlg\") pod \"console-78c47f664f-wpqzf\" (UID: \"80aafb5f-a486-4afe-8402-a592e62b95d8\") " pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.432161 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d90c432-384c-4a43-a2cf-b26c3804a632-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.432238 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9d90c432-384c-4a43-a2cf-b26c3804a632-prometheus-metric-storage-rulefiles-1\") pod 
\"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.432272 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d90c432-384c-4a43-a2cf-b26c3804a632-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.432415 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d90c432-384c-4a43-a2cf-b26c3804a632-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.432587 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d90c432-384c-4a43-a2cf-b26c3804a632-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.432658 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.432704 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" 
(UniqueName: \"kubernetes.io/empty-dir/9d90c432-384c-4a43-a2cf-b26c3804a632-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.432729 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d90c432-384c-4a43-a2cf-b26c3804a632-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.432825 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbs59\" (UniqueName: \"kubernetes.io/projected/9d90c432-384c-4a43-a2cf-b26c3804a632-kube-api-access-pbs59\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.432944 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9d90c432-384c-4a43-a2cf-b26c3804a632-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.477111 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.534795 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d90c432-384c-4a43-a2cf-b26c3804a632-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.534856 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.534884 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d90c432-384c-4a43-a2cf-b26c3804a632-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.534907 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d90c432-384c-4a43-a2cf-b26c3804a632-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.534942 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbs59\" (UniqueName: \"kubernetes.io/projected/9d90c432-384c-4a43-a2cf-b26c3804a632-kube-api-access-pbs59\") pod \"prometheus-metric-storage-0\" 
(UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.534979 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fec191d1-b76f-4b8c-94c2-2d217a21951c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-jc2j2\" (UID: \"fec191d1-b76f-4b8c-94c2-2d217a21951c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-jc2j2" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.534996 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9d90c432-384c-4a43-a2cf-b26c3804a632-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.535045 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d90c432-384c-4a43-a2cf-b26c3804a632-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.535081 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9d90c432-384c-4a43-a2cf-b26c3804a632-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.535108 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/9d90c432-384c-4a43-a2cf-b26c3804a632-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.535132 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d90c432-384c-4a43-a2cf-b26c3804a632-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.535924 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d90c432-384c-4a43-a2cf-b26c3804a632-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.538413 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9d90c432-384c-4a43-a2cf-b26c3804a632-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.538850 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fec191d1-b76f-4b8c-94c2-2d217a21951c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-jc2j2\" (UID: \"fec191d1-b76f-4b8c-94c2-2d217a21951c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-jc2j2" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.539119 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" 
(UniqueName: \"kubernetes.io/empty-dir/9d90c432-384c-4a43-a2cf-b26c3804a632-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.539382 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9d90c432-384c-4a43-a2cf-b26c3804a632-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.540646 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d90c432-384c-4a43-a2cf-b26c3804a632-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.541856 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d90c432-384c-4a43-a2cf-b26c3804a632-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.542160 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d90c432-384c-4a43-a2cf-b26c3804a632-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.543580 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.543630 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/da9c5ded63a0483190dd926a1aaaeae7e9f2dd385d7e3f523dfb80808a161f9d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.543944 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d90c432-384c-4a43-a2cf-b26c3804a632-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.563701 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbs59\" (UniqueName: \"kubernetes.io/projected/9d90c432-384c-4a43-a2cf-b26c3804a632-kube-api-access-pbs59\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.582291 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\") pod \"prometheus-metric-storage-0\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.707973 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 27 10:46:37 crc kubenswrapper[4728]: I0227 10:46:37.716455 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-jc2j2" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.002807 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bd5fc"] Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.004641 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.009367 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-cg2mk" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.010044 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.010398 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.013619 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-bhldn"] Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.016338 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.028369 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bd5fc"] Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.036743 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bhldn"] Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.068842 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/20d22b86-c3cb-4b12-8e88-35369d033e1e-var-log-ovn\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.068947 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbdsx\" (UniqueName: \"kubernetes.io/projected/20d22b86-c3cb-4b12-8e88-35369d033e1e-kube-api-access-pbdsx\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.068979 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e03375ec-5705-44e3-9dda-686c809cf4ef-var-lib\") pod \"ovn-controller-ovs-bhldn\" (UID: \"e03375ec-5705-44e3-9dda-686c809cf4ef\") " pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.069013 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2kj8\" (UniqueName: \"kubernetes.io/projected/e03375ec-5705-44e3-9dda-686c809cf4ef-kube-api-access-m2kj8\") pod \"ovn-controller-ovs-bhldn\" (UID: \"e03375ec-5705-44e3-9dda-686c809cf4ef\") " pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc 
kubenswrapper[4728]: I0227 10:46:39.069032 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e03375ec-5705-44e3-9dda-686c809cf4ef-var-log\") pod \"ovn-controller-ovs-bhldn\" (UID: \"e03375ec-5705-44e3-9dda-686c809cf4ef\") " pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.069058 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e03375ec-5705-44e3-9dda-686c809cf4ef-scripts\") pod \"ovn-controller-ovs-bhldn\" (UID: \"e03375ec-5705-44e3-9dda-686c809cf4ef\") " pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.069082 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20d22b86-c3cb-4b12-8e88-35369d033e1e-scripts\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.069107 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e03375ec-5705-44e3-9dda-686c809cf4ef-etc-ovs\") pod \"ovn-controller-ovs-bhldn\" (UID: \"e03375ec-5705-44e3-9dda-686c809cf4ef\") " pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.069124 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d22b86-c3cb-4b12-8e88-35369d033e1e-combined-ca-bundle\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.069146 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/20d22b86-c3cb-4b12-8e88-35369d033e1e-ovn-controller-tls-certs\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.069163 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e03375ec-5705-44e3-9dda-686c809cf4ef-var-run\") pod \"ovn-controller-ovs-bhldn\" (UID: \"e03375ec-5705-44e3-9dda-686c809cf4ef\") " pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.069187 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/20d22b86-c3cb-4b12-8e88-35369d033e1e-var-run-ovn\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.069215 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/20d22b86-c3cb-4b12-8e88-35369d033e1e-var-run\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.170565 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2kj8\" (UniqueName: \"kubernetes.io/projected/e03375ec-5705-44e3-9dda-686c809cf4ef-kube-api-access-m2kj8\") pod \"ovn-controller-ovs-bhldn\" (UID: \"e03375ec-5705-44e3-9dda-686c809cf4ef\") " pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.170738 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e03375ec-5705-44e3-9dda-686c809cf4ef-var-log\") pod \"ovn-controller-ovs-bhldn\" (UID: \"e03375ec-5705-44e3-9dda-686c809cf4ef\") " pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.170899 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e03375ec-5705-44e3-9dda-686c809cf4ef-scripts\") pod \"ovn-controller-ovs-bhldn\" (UID: \"e03375ec-5705-44e3-9dda-686c809cf4ef\") " pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.171056 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20d22b86-c3cb-4b12-8e88-35369d033e1e-scripts\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.171214 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e03375ec-5705-44e3-9dda-686c809cf4ef-etc-ovs\") pod \"ovn-controller-ovs-bhldn\" (UID: \"e03375ec-5705-44e3-9dda-686c809cf4ef\") " pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.171268 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e03375ec-5705-44e3-9dda-686c809cf4ef-var-log\") pod \"ovn-controller-ovs-bhldn\" (UID: \"e03375ec-5705-44e3-9dda-686c809cf4ef\") " pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.171391 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d22b86-c3cb-4b12-8e88-35369d033e1e-combined-ca-bundle\") pod 
\"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.171546 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e03375ec-5705-44e3-9dda-686c809cf4ef-etc-ovs\") pod \"ovn-controller-ovs-bhldn\" (UID: \"e03375ec-5705-44e3-9dda-686c809cf4ef\") " pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.171614 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/20d22b86-c3cb-4b12-8e88-35369d033e1e-ovn-controller-tls-certs\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.171651 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e03375ec-5705-44e3-9dda-686c809cf4ef-var-run\") pod \"ovn-controller-ovs-bhldn\" (UID: \"e03375ec-5705-44e3-9dda-686c809cf4ef\") " pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.171883 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e03375ec-5705-44e3-9dda-686c809cf4ef-var-run\") pod \"ovn-controller-ovs-bhldn\" (UID: \"e03375ec-5705-44e3-9dda-686c809cf4ef\") " pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.172098 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/20d22b86-c3cb-4b12-8e88-35369d033e1e-var-run-ovn\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 
10:46:39.172153 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/20d22b86-c3cb-4b12-8e88-35369d033e1e-var-run-ovn\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.172364 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/20d22b86-c3cb-4b12-8e88-35369d033e1e-var-run\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.172477 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/20d22b86-c3cb-4b12-8e88-35369d033e1e-var-log-ovn\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.172823 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/20d22b86-c3cb-4b12-8e88-35369d033e1e-var-log-ovn\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.172419 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/20d22b86-c3cb-4b12-8e88-35369d033e1e-var-run\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.173058 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbdsx\" (UniqueName: 
\"kubernetes.io/projected/20d22b86-c3cb-4b12-8e88-35369d033e1e-kube-api-access-pbdsx\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.173213 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e03375ec-5705-44e3-9dda-686c809cf4ef-var-lib\") pod \"ovn-controller-ovs-bhldn\" (UID: \"e03375ec-5705-44e3-9dda-686c809cf4ef\") " pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.173313 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e03375ec-5705-44e3-9dda-686c809cf4ef-scripts\") pod \"ovn-controller-ovs-bhldn\" (UID: \"e03375ec-5705-44e3-9dda-686c809cf4ef\") " pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.173475 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e03375ec-5705-44e3-9dda-686c809cf4ef-var-lib\") pod \"ovn-controller-ovs-bhldn\" (UID: \"e03375ec-5705-44e3-9dda-686c809cf4ef\") " pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.173559 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20d22b86-c3cb-4b12-8e88-35369d033e1e-scripts\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.177082 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d22b86-c3cb-4b12-8e88-35369d033e1e-combined-ca-bundle\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " 
pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.177625 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/20d22b86-c3cb-4b12-8e88-35369d033e1e-ovn-controller-tls-certs\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.195202 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2kj8\" (UniqueName: \"kubernetes.io/projected/e03375ec-5705-44e3-9dda-686c809cf4ef-kube-api-access-m2kj8\") pod \"ovn-controller-ovs-bhldn\" (UID: \"e03375ec-5705-44e3-9dda-686c809cf4ef\") " pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.201137 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbdsx\" (UniqueName: \"kubernetes.io/projected/20d22b86-c3cb-4b12-8e88-35369d033e1e-kube-api-access-pbdsx\") pod \"ovn-controller-bd5fc\" (UID: \"20d22b86-c3cb-4b12-8e88-35369d033e1e\") " pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.333918 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bd5fc" Feb 27 10:46:39 crc kubenswrapper[4728]: I0227 10:46:39.348124 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:39.920514 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:39.923050 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:39.927822 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:39.928156 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:39.928260 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:39.928353 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-4rqgr" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:39.928450 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:39.989588 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/94cc80b4-aa0c-439a-be56-22b86add0bb3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:39.989647 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/94cc80b4-aa0c-439a-be56-22b86add0bb3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:39.989679 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94cc80b4-aa0c-439a-be56-22b86add0bb3-combined-ca-bundle\") 
pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:39.989707 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/94cc80b4-aa0c-439a-be56-22b86add0bb3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:39.989727 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm257\" (UniqueName: \"kubernetes.io/projected/94cc80b4-aa0c-439a-be56-22b86add0bb3-kube-api-access-fm257\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:39.989768 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8991397a-ec7e-49b0-991e-747df9a03c33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8991397a-ec7e-49b0-991e-747df9a03c33\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:39.989822 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94cc80b4-aa0c-439a-be56-22b86add0bb3-config\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:39.989851 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94cc80b4-aa0c-439a-be56-22b86add0bb3-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:39.995803 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.094613 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94cc80b4-aa0c-439a-be56-22b86add0bb3-config\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.094670 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94cc80b4-aa0c-439a-be56-22b86add0bb3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.094728 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/94cc80b4-aa0c-439a-be56-22b86add0bb3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.094754 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/94cc80b4-aa0c-439a-be56-22b86add0bb3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.094794 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94cc80b4-aa0c-439a-be56-22b86add0bb3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.094821 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/94cc80b4-aa0c-439a-be56-22b86add0bb3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.094845 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm257\" (UniqueName: \"kubernetes.io/projected/94cc80b4-aa0c-439a-be56-22b86add0bb3-kube-api-access-fm257\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.094889 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8991397a-ec7e-49b0-991e-747df9a03c33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8991397a-ec7e-49b0-991e-747df9a03c33\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.095989 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94cc80b4-aa0c-439a-be56-22b86add0bb3-config\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.096777 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94cc80b4-aa0c-439a-be56-22b86add0bb3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.099346 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/94cc80b4-aa0c-439a-be56-22b86add0bb3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.116397 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/94cc80b4-aa0c-439a-be56-22b86add0bb3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.116470 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/94cc80b4-aa0c-439a-be56-22b86add0bb3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.128474 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94cc80b4-aa0c-439a-be56-22b86add0bb3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.128651 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm257\" (UniqueName: \"kubernetes.io/projected/94cc80b4-aa0c-439a-be56-22b86add0bb3-kube-api-access-fm257\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.135471 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.136747 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8991397a-ec7e-49b0-991e-747df9a03c33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8991397a-ec7e-49b0-991e-747df9a03c33\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/62dcf6b1bfa6b5cbad73fc171aed323dde3b8e8ddc7c81f69ceae144de838251/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.178970 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8991397a-ec7e-49b0-991e-747df9a03c33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8991397a-ec7e-49b0-991e-747df9a03c33\") pod \"ovsdbserver-nb-0\" (UID: \"94cc80b4-aa0c-439a-be56-22b86add0bb3\") " pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:40 crc kubenswrapper[4728]: I0227 10:46:40.250852 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.424580 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.426869 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.429066 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.429388 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-9tc4b" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.430905 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.431470 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.439298 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.463619 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be54b22-6600-4033-92e9-10fd8a540238-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.463846 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be54b22-6600-4033-92e9-10fd8a540238-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.464008 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4f8cc813-fdd1-48d8-9493-a439fae65204\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f8cc813-fdd1-48d8-9493-a439fae65204\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.464086 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j29b\" (UniqueName: \"kubernetes.io/projected/0be54b22-6600-4033-92e9-10fd8a540238-kube-api-access-2j29b\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.464115 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0be54b22-6600-4033-92e9-10fd8a540238-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.464171 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0be54b22-6600-4033-92e9-10fd8a540238-config\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.464204 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be54b22-6600-4033-92e9-10fd8a540238-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.464280 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/0be54b22-6600-4033-92e9-10fd8a540238-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.566484 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be54b22-6600-4033-92e9-10fd8a540238-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.566615 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4f8cc813-fdd1-48d8-9493-a439fae65204\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f8cc813-fdd1-48d8-9493-a439fae65204\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.566671 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j29b\" (UniqueName: \"kubernetes.io/projected/0be54b22-6600-4033-92e9-10fd8a540238-kube-api-access-2j29b\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.566716 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0be54b22-6600-4033-92e9-10fd8a540238-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.566794 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0be54b22-6600-4033-92e9-10fd8a540238-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.566851 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be54b22-6600-4033-92e9-10fd8a540238-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.566912 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0be54b22-6600-4033-92e9-10fd8a540238-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.566999 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be54b22-6600-4033-92e9-10fd8a540238-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.567531 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0be54b22-6600-4033-92e9-10fd8a540238-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.568044 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0be54b22-6600-4033-92e9-10fd8a540238-config\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.568129 4728 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0be54b22-6600-4033-92e9-10fd8a540238-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.569972 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.569998 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4f8cc813-fdd1-48d8-9493-a439fae65204\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f8cc813-fdd1-48d8-9493-a439fae65204\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/519790c4d2237131a304ffe57c1ad3ff614b01e87c6a7d8464bb3d1fc03c5fdb/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.572292 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be54b22-6600-4033-92e9-10fd8a540238-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.572475 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be54b22-6600-4033-92e9-10fd8a540238-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.573685 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be54b22-6600-4033-92e9-10fd8a540238-metrics-certs-tls-certs\") pod 
\"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.582447 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j29b\" (UniqueName: \"kubernetes.io/projected/0be54b22-6600-4033-92e9-10fd8a540238-kube-api-access-2j29b\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.598944 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4f8cc813-fdd1-48d8-9493-a439fae65204\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f8cc813-fdd1-48d8-9493-a439fae65204\") pod \"ovsdbserver-sb-0\" (UID: \"0be54b22-6600-4033-92e9-10fd8a540238\") " pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:43 crc kubenswrapper[4728]: I0227 10:46:43.758651 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 27 10:46:44 crc kubenswrapper[4728]: I0227 10:46:44.451665 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 27 10:46:44 crc kubenswrapper[4728]: E0227 10:46:44.844230 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 27 10:46:44 crc kubenswrapper[4728]: E0227 10:46:44.844847 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dv6pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-br2wj_openstack(18512c82-a131-4c81-ab2f-c4d812729f46): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:46:44 crc kubenswrapper[4728]: E0227 10:46:44.846074 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-br2wj" podUID="18512c82-a131-4c81-ab2f-c4d812729f46" Feb 27 10:46:44 crc kubenswrapper[4728]: E0227 10:46:44.851160 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 27 10:46:44 crc kubenswrapper[4728]: E0227 10:46:44.851298 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvqkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-xtqcx_openstack(8879fddd-543d-4f27-bf6f-c78338147058): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:46:44 crc kubenswrapper[4728]: E0227 10:46:44.852729 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-xtqcx" podUID="8879fddd-543d-4f27-bf6f-c78338147058" Feb 27 10:46:44 crc kubenswrapper[4728]: I0227 10:46:44.924344 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6e834d11-1d93-42ba-8dfe-f17c9faddff2","Type":"ContainerStarted","Data":"e4019760180b3ae5a5f69b7417b90dfb0fe2a3fbd07740d0a193bcf6e073dfa9"} Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.560040 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 10:46:45 crc kubenswrapper[4728]: W0227 10:46:45.573443 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7b93ac4_55f2_4491_b4b4_f8abfd837dfa.slice/crio-1390eb941649c0aff4c598754edb92b3179e22a63b1a7433540f25749475fb6b WatchSource:0}: Error finding container 
1390eb941649c0aff4c598754edb92b3179e22a63b1a7433540f25749475fb6b: Status 404 returned error can't find the container with id 1390eb941649c0aff4c598754edb92b3179e22a63b1a7433540f25749475fb6b Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.687621 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 27 10:46:45 crc kubenswrapper[4728]: W0227 10:46:45.696997 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad00da50_2e05_4612_a862_5cccd698e77b.slice/crio-6e35e91f6db4e63f3741e13105b339155b27780cf82cd562159503978bb6aacc WatchSource:0}: Error finding container 6e35e91f6db4e63f3741e13105b339155b27780cf82cd562159503978bb6aacc: Status 404 returned error can't find the container with id 6e35e91f6db4e63f3741e13105b339155b27780cf82cd562159503978bb6aacc Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.865497 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-br2wj" Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.871016 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xtqcx" Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.924452 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv6pp\" (UniqueName: \"kubernetes.io/projected/18512c82-a131-4c81-ab2f-c4d812729f46-kube-api-access-dv6pp\") pod \"18512c82-a131-4c81-ab2f-c4d812729f46\" (UID: \"18512c82-a131-4c81-ab2f-c4d812729f46\") " Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.924524 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18512c82-a131-4c81-ab2f-c4d812729f46-config\") pod \"18512c82-a131-4c81-ab2f-c4d812729f46\" (UID: \"18512c82-a131-4c81-ab2f-c4d812729f46\") " Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.924605 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvqkd\" (UniqueName: \"kubernetes.io/projected/8879fddd-543d-4f27-bf6f-c78338147058-kube-api-access-pvqkd\") pod \"8879fddd-543d-4f27-bf6f-c78338147058\" (UID: \"8879fddd-543d-4f27-bf6f-c78338147058\") " Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.924743 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8879fddd-543d-4f27-bf6f-c78338147058-dns-svc\") pod \"8879fddd-543d-4f27-bf6f-c78338147058\" (UID: \"8879fddd-543d-4f27-bf6f-c78338147058\") " Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.925111 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8879fddd-543d-4f27-bf6f-c78338147058-config\") pod \"8879fddd-543d-4f27-bf6f-c78338147058\" (UID: \"8879fddd-543d-4f27-bf6f-c78338147058\") " Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.925247 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8879fddd-543d-4f27-bf6f-c78338147058-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8879fddd-543d-4f27-bf6f-c78338147058" (UID: "8879fddd-543d-4f27-bf6f-c78338147058"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.925313 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18512c82-a131-4c81-ab2f-c4d812729f46-config" (OuterVolumeSpecName: "config") pod "18512c82-a131-4c81-ab2f-c4d812729f46" (UID: "18512c82-a131-4c81-ab2f-c4d812729f46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.926231 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8879fddd-543d-4f27-bf6f-c78338147058-config" (OuterVolumeSpecName: "config") pod "8879fddd-543d-4f27-bf6f-c78338147058" (UID: "8879fddd-543d-4f27-bf6f-c78338147058"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.927005 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8879fddd-543d-4f27-bf6f-c78338147058-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.927057 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18512c82-a131-4c81-ab2f-c4d812729f46-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.927074 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8879fddd-543d-4f27-bf6f-c78338147058-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.932358 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18512c82-a131-4c81-ab2f-c4d812729f46-kube-api-access-dv6pp" (OuterVolumeSpecName: "kube-api-access-dv6pp") pod "18512c82-a131-4c81-ab2f-c4d812729f46" (UID: "18512c82-a131-4c81-ab2f-c4d812729f46"). InnerVolumeSpecName "kube-api-access-dv6pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.932729 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8879fddd-543d-4f27-bf6f-c78338147058-kube-api-access-pvqkd" (OuterVolumeSpecName: "kube-api-access-pvqkd") pod "8879fddd-543d-4f27-bf6f-c78338147058" (UID: "8879fddd-543d-4f27-bf6f-c78338147058"). InnerVolumeSpecName "kube-api-access-pvqkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.936348 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-xtqcx" event={"ID":"8879fddd-543d-4f27-bf6f-c78338147058","Type":"ContainerDied","Data":"2bb6b1e09737a7af7d8d2ef205e1d3238e3f325a969c0bcc50179e7cfe60b439"} Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.936368 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xtqcx" Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.937371 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa","Type":"ContainerStarted","Data":"1390eb941649c0aff4c598754edb92b3179e22a63b1a7433540f25749475fb6b"} Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.938331 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ad00da50-2e05-4612-a862-5cccd698e77b","Type":"ContainerStarted","Data":"6e35e91f6db4e63f3741e13105b339155b27780cf82cd562159503978bb6aacc"} Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.939678 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" event={"ID":"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0","Type":"ContainerDied","Data":"75eed1813ea14af39108ac71772ba33817f909960c4565e843f9e5fbe21d7c2d"} Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.940635 4728 generic.go:334] "Generic (PLEG): container finished" podID="4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0" containerID="75eed1813ea14af39108ac71772ba33817f909960c4565e843f9e5fbe21d7c2d" exitCode=0 Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.942440 4728 generic.go:334] "Generic (PLEG): container finished" podID="b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0" containerID="6493c0364199145f1e4802e35b468d4a15e48c3fe1388feef2e9b63d4aba2117" exitCode=0 Feb 
27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.942546 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" event={"ID":"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0","Type":"ContainerDied","Data":"6493c0364199145f1e4802e35b468d4a15e48c3fe1388feef2e9b63d4aba2117"} Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.945645 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-br2wj" event={"ID":"18512c82-a131-4c81-ab2f-c4d812729f46","Type":"ContainerDied","Data":"6517c27dc9054bf649862fac038c04f398b45102964935266251633bfe82b3ae"} Feb 27 10:46:45 crc kubenswrapper[4728]: I0227 10:46:45.945697 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-br2wj" Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.029485 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv6pp\" (UniqueName: \"kubernetes.io/projected/18512c82-a131-4c81-ab2f-c4d812729f46-kube-api-access-dv6pp\") on node \"crc\" DevicePath \"\"" Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.029666 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvqkd\" (UniqueName: \"kubernetes.io/projected/8879fddd-543d-4f27-bf6f-c78338147058-kube-api-access-pvqkd\") on node \"crc\" DevicePath \"\"" Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.170269 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-br2wj"] Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.191483 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-br2wj"] Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.211211 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xtqcx"] Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.218191 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-78dd6ddcc-xtqcx"] Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.718131 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.760126 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18512c82-a131-4c81-ab2f-c4d812729f46" path="/var/lib/kubelet/pods/18512c82-a131-4c81-ab2f-c4d812729f46/volumes" Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.760495 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8879fddd-543d-4f27-bf6f-c78338147058" path="/var/lib/kubelet/pods/8879fddd-543d-4f27-bf6f-c78338147058/volumes" Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.760858 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.772291 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.794477 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-jc2j2"] Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.804358 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.812901 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-78c47f664f-wpqzf"] Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.819740 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.836341 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.857107 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bd5fc"] 
Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.956678 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" event={"ID":"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0","Type":"ContainerStarted","Data":"ca43cefce3d00218282806f2a7b3925d5de51832aeffe6151436d3058c14586b"} Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.957152 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" Feb 27 10:46:46 crc kubenswrapper[4728]: I0227 10:46:46.982302 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" podStartSLOduration=3.497626527 podStartE2EDuration="17.982284314s" podCreationTimestamp="2026-02-27 10:46:29 +0000 UTC" firstStartedPulling="2026-02-27 10:46:30.6515719 +0000 UTC m=+1210.613938006" lastFinishedPulling="2026-02-27 10:46:45.136229687 +0000 UTC m=+1225.098595793" observedRunningTime="2026-02-27 10:46:46.978322195 +0000 UTC m=+1226.940688301" watchObservedRunningTime="2026-02-27 10:46:46.982284314 +0000 UTC m=+1226.944650420" Feb 27 10:46:47 crc kubenswrapper[4728]: E0227 10:46:47.272754 4728 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 27 10:46:47 crc kubenswrapper[4728]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 27 10:46:47 crc kubenswrapper[4728]: > podSandboxID="7de452135186dd165ad0cf00c04ac3d340b9651ae13fea7b8f3e50e76f13b0ff" Feb 27 10:46:47 crc kubenswrapper[4728]: E0227 10:46:47.272890 4728 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 10:46:47 crc kubenswrapper[4728]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* 
--conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8brl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-zpz7d_openstack(b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 27 10:46:47 crc kubenswrapper[4728]: > logger="UnhandledError" Feb 27 10:46:47 crc kubenswrapper[4728]: E0227 10:46:47.273988 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" podUID="b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0" Feb 27 10:46:47 crc kubenswrapper[4728]: I0227 10:46:47.433035 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bhldn"] Feb 27 10:46:47 crc kubenswrapper[4728]: I0227 10:46:47.967416 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"26ecfb63-8476-497d-9cb3-3729c4961b4e","Type":"ContainerStarted","Data":"20bd26906667b33fbc7ef32e65d32ccef8bfad6703dd271d66749056416e5032"} Feb 27 10:46:47 crc kubenswrapper[4728]: I0227 10:46:47.969453 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-jc2j2" event={"ID":"fec191d1-b76f-4b8c-94c2-2d217a21951c","Type":"ContainerStarted","Data":"7cd08e30891669108caf6d40b5675dc10d4cdc919b59616e4fea9f5beae71366"} Feb 27 10:46:47 crc kubenswrapper[4728]: I0227 10:46:47.970845 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"d96ab6cd-ed9d-4924-9566-91930411701d","Type":"ContainerStarted","Data":"2d51f58ce483d814534088b6d04d9e8dce3267e8a571edb8bc4a89e066226ddd"} Feb 27 10:46:47 crc kubenswrapper[4728]: I0227 10:46:47.972560 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bhldn" event={"ID":"e03375ec-5705-44e3-9dda-686c809cf4ef","Type":"ContainerStarted","Data":"f583aeb8692655b6530947c33090aefaaefd63c86d53af350d2d336bb2b03a71"} Feb 27 10:46:47 crc kubenswrapper[4728]: I0227 10:46:47.973779 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ed5c2715-a8a7-4d10-ba69-32133e2b6e51","Type":"ContainerStarted","Data":"be2f88fb9a96191505ae662e54e0b8050a92780a5316ca642015bd9582aebf5e"} Feb 27 10:46:47 crc kubenswrapper[4728]: I0227 10:46:47.975234 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bd5fc" event={"ID":"20d22b86-c3cb-4b12-8e88-35369d033e1e","Type":"ContainerStarted","Data":"8190e6b9f19b7c9bea3b68c2fd372b32f09f67edb85631a42cd194a132042cc0"} Feb 27 10:46:47 crc kubenswrapper[4728]: I0227 10:46:47.976981 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"5948716b-2c2b-4a90-b4b5-f8daad17f020","Type":"ContainerStarted","Data":"bad6251d347acc9e66730129b933f53650d03311f9dc5315a9f605ae06d55405"} Feb 27 10:46:47 crc kubenswrapper[4728]: I0227 10:46:47.978956 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6e834d11-1d93-42ba-8dfe-f17c9faddff2","Type":"ContainerStarted","Data":"b2712b08b2b8d2503bc503ad0ef96bfb22f11a45448bf39f64a98c44ff7a7d4e"} Feb 27 10:46:47 crc kubenswrapper[4728]: I0227 10:46:47.979139 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 27 10:46:47 crc kubenswrapper[4728]: I0227 10:46:47.980466 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"803ed01f-b95c-4718-a5e8-3a864b0b7850","Type":"ContainerStarted","Data":"846c606e30bb1d2f94fcba6d4d1a80ffcbf1718be18c7d33a758686ec827a350"} Feb 27 10:46:47 crc kubenswrapper[4728]: I0227 10:46:47.982031 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d90c432-384c-4a43-a2cf-b26c3804a632","Type":"ContainerStarted","Data":"230864acd20b1d218307455ac86d10812859e6b9018472e64fe0c97099a0802f"} Feb 27 10:46:47 crc kubenswrapper[4728]: I0227 10:46:47.984748 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78c47f664f-wpqzf" event={"ID":"80aafb5f-a486-4afe-8402-a592e62b95d8","Type":"ContainerStarted","Data":"bdbe4fdcf12b37ec260ff2d88f56180aad21f4233f7bb97451ae5ba8e699269a"} Feb 27 10:46:47 crc kubenswrapper[4728]: I0227 10:46:47.984781 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78c47f664f-wpqzf" event={"ID":"80aafb5f-a486-4afe-8402-a592e62b95d8","Type":"ContainerStarted","Data":"52cd8295284fea909eac35c6e4f15c7d3d7fd4e27a7419969bd1ed93c3412de3"} Feb 27 10:46:47 crc kubenswrapper[4728]: I0227 10:46:47.995359 4728 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/memcached-0" podStartSLOduration=12.417533733 podStartE2EDuration="14.995343555s" podCreationTimestamp="2026-02-27 10:46:33 +0000 UTC" firstStartedPulling="2026-02-27 10:46:44.882497779 +0000 UTC m=+1224.844863905" lastFinishedPulling="2026-02-27 10:46:47.460307621 +0000 UTC m=+1227.422673727" observedRunningTime="2026-02-27 10:46:47.993678449 +0000 UTC m=+1227.956044555" watchObservedRunningTime="2026-02-27 10:46:47.995343555 +0000 UTC m=+1227.957709661" Feb 27 10:46:48 crc kubenswrapper[4728]: I0227 10:46:48.037312 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-78c47f664f-wpqzf" podStartSLOduration=11.037278688 podStartE2EDuration="11.037278688s" podCreationTimestamp="2026-02-27 10:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:46:48.035545661 +0000 UTC m=+1227.997911787" watchObservedRunningTime="2026-02-27 10:46:48.037278688 +0000 UTC m=+1227.999644784" Feb 27 10:46:48 crc kubenswrapper[4728]: I0227 10:46:48.302050 4728 scope.go:117] "RemoveContainer" containerID="32ffc77fce0486bb172e2a6183397a95b8d10e04eee3214d055edb961fd1a7d4" Feb 27 10:46:48 crc kubenswrapper[4728]: I0227 10:46:48.321459 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 10:46:48 crc kubenswrapper[4728]: I0227 10:46:48.426632 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 10:46:54 crc kubenswrapper[4728]: I0227 10:46:54.092916 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 27 10:46:55 crc kubenswrapper[4728]: I0227 10:46:55.057940 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"94cc80b4-aa0c-439a-be56-22b86add0bb3","Type":"ContainerStarted","Data":"318fb7f3743a3e9a96a4dfab50d3cb010c3c40a7282ab1cb031f4516724b7786"} Feb 27 10:46:55 crc kubenswrapper[4728]: I0227 10:46:55.135643 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" Feb 27 10:46:55 crc kubenswrapper[4728]: I0227 10:46:55.250145 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-zpz7d"] Feb 27 10:46:56 crc kubenswrapper[4728]: I0227 10:46:56.252360 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-xgqx9"] Feb 27 10:46:56 crc kubenswrapper[4728]: I0227 10:46:56.254361 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" Feb 27 10:46:56 crc kubenswrapper[4728]: I0227 10:46:56.279880 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-xgqx9"] Feb 27 10:46:56 crc kubenswrapper[4728]: I0227 10:46:56.383282 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jnj2\" (UniqueName: \"kubernetes.io/projected/962b4d7e-3021-44a1-9374-0cbf20fbffa1-kube-api-access-7jnj2\") pod \"dnsmasq-dns-7cb5889db5-xgqx9\" (UID: \"962b4d7e-3021-44a1-9374-0cbf20fbffa1\") " pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" Feb 27 10:46:56 crc kubenswrapper[4728]: I0227 10:46:56.383342 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/962b4d7e-3021-44a1-9374-0cbf20fbffa1-config\") pod \"dnsmasq-dns-7cb5889db5-xgqx9\" (UID: \"962b4d7e-3021-44a1-9374-0cbf20fbffa1\") " pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" Feb 27 10:46:56 crc kubenswrapper[4728]: I0227 10:46:56.383366 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/962b4d7e-3021-44a1-9374-0cbf20fbffa1-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-xgqx9\" (UID: \"962b4d7e-3021-44a1-9374-0cbf20fbffa1\") " pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" Feb 27 10:46:56 crc kubenswrapper[4728]: I0227 10:46:56.485161 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jnj2\" (UniqueName: \"kubernetes.io/projected/962b4d7e-3021-44a1-9374-0cbf20fbffa1-kube-api-access-7jnj2\") pod \"dnsmasq-dns-7cb5889db5-xgqx9\" (UID: \"962b4d7e-3021-44a1-9374-0cbf20fbffa1\") " pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" Feb 27 10:46:56 crc kubenswrapper[4728]: I0227 10:46:56.485231 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/962b4d7e-3021-44a1-9374-0cbf20fbffa1-config\") pod \"dnsmasq-dns-7cb5889db5-xgqx9\" (UID: \"962b4d7e-3021-44a1-9374-0cbf20fbffa1\") " pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" Feb 27 10:46:56 crc kubenswrapper[4728]: I0227 10:46:56.485261 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/962b4d7e-3021-44a1-9374-0cbf20fbffa1-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-xgqx9\" (UID: \"962b4d7e-3021-44a1-9374-0cbf20fbffa1\") " pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" Feb 27 10:46:56 crc kubenswrapper[4728]: I0227 10:46:56.486329 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/962b4d7e-3021-44a1-9374-0cbf20fbffa1-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-xgqx9\" (UID: \"962b4d7e-3021-44a1-9374-0cbf20fbffa1\") " pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" Feb 27 10:46:56 crc kubenswrapper[4728]: I0227 10:46:56.487321 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/962b4d7e-3021-44a1-9374-0cbf20fbffa1-config\") pod 
\"dnsmasq-dns-7cb5889db5-xgqx9\" (UID: \"962b4d7e-3021-44a1-9374-0cbf20fbffa1\") " pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" Feb 27 10:46:56 crc kubenswrapper[4728]: I0227 10:46:56.518784 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jnj2\" (UniqueName: \"kubernetes.io/projected/962b4d7e-3021-44a1-9374-0cbf20fbffa1-kube-api-access-7jnj2\") pod \"dnsmasq-dns-7cb5889db5-xgqx9\" (UID: \"962b4d7e-3021-44a1-9374-0cbf20fbffa1\") " pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" Feb 27 10:46:56 crc kubenswrapper[4728]: I0227 10:46:56.589188 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.354761 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.364453 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.438105 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-vvmtw" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.439396 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.439847 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.440033 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.482183 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.482224 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.495022 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.497781 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.541697 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d2d04088-6aa0-49dd-9d00-66044720f5d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2d04088-6aa0-49dd-9d00-66044720f5d8\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.541774 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.541799 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ec0a9664-7538-43dd-904d-c386d569999e-lock\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.541844 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x56mt\" (UniqueName: \"kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-kube-api-access-x56mt\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 
10:46:57.541893 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec0a9664-7538-43dd-904d-c386d569999e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.541920 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ec0a9664-7538-43dd-904d-c386d569999e-cache\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.645031 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.645088 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ec0a9664-7538-43dd-904d-c386d569999e-lock\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.645155 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x56mt\" (UniqueName: \"kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-kube-api-access-x56mt\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: E0227 10:46:57.645189 4728 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 10:46:57 crc kubenswrapper[4728]: E0227 
10:46:57.645221 4728 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 10:46:57 crc kubenswrapper[4728]: E0227 10:46:57.645277 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift podName:ec0a9664-7538-43dd-904d-c386d569999e nodeName:}" failed. No retries permitted until 2026-02-27 10:46:58.14525518 +0000 UTC m=+1238.107621286 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift") pod "swift-storage-0" (UID: "ec0a9664-7538-43dd-904d-c386d569999e") : configmap "swift-ring-files" not found Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.645208 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec0a9664-7538-43dd-904d-c386d569999e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.645570 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ec0a9664-7538-43dd-904d-c386d569999e-cache\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.645901 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ec0a9664-7538-43dd-904d-c386d569999e-lock\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.646059 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-d2d04088-6aa0-49dd-9d00-66044720f5d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2d04088-6aa0-49dd-9d00-66044720f5d8\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.646156 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ec0a9664-7538-43dd-904d-c386d569999e-cache\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.648934 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec0a9664-7538-43dd-904d-c386d569999e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.651302 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.651330 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d2d04088-6aa0-49dd-9d00-66044720f5d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2d04088-6aa0-49dd-9d00-66044720f5d8\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5a13c4f89b6388f1453492c05c5e73b8b0661be1a36e02137a706731454ba9a4/globalmount\"" pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.667719 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x56mt\" (UniqueName: \"kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-kube-api-access-x56mt\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.693682 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d2d04088-6aa0-49dd-9d00-66044720f5d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2d04088-6aa0-49dd-9d00-66044720f5d8\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.862104 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-5fw24"] Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.863677 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.872469 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.872688 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.872832 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.879336 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5fw24"] Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.926140 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-wnkfg"] Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.927805 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.939201 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-5fw24"] Feb 27 10:46:57 crc kubenswrapper[4728]: E0227 10:46:57.940080 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-8skrr ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-5fw24" podUID="51e74345-bd5e-4679-b4f5-3fbb1c1c6752" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.950744 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8skrr\" (UniqueName: \"kubernetes.io/projected/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-kube-api-access-8skrr\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.950825 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-ring-data-devices\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.950883 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-etc-swift\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.950918 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-scripts\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.950939 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-swiftconf\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.950974 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-combined-ca-bundle\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.950999 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-dispersionconf\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:57 crc kubenswrapper[4728]: I0227 10:46:57.960307 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wnkfg"] Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.052933 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8skrr\" (UniqueName: \"kubernetes.io/projected/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-kube-api-access-8skrr\") pod \"swift-ring-rebalance-5fw24\" (UID: 
\"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.052999 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec9639a6-9853-49cd-8215-0301af98d73b-scripts\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.053030 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ec9639a6-9853-49cd-8215-0301af98d73b-dispersionconf\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.053064 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-ring-data-devices\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.053105 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-etc-swift\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.053164 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-scripts\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " 
pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.053185 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-swiftconf\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.053201 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ec9639a6-9853-49cd-8215-0301af98d73b-etc-swift\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.053228 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-combined-ca-bundle\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.053252 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-dispersionconf\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.053274 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9639a6-9853-49cd-8215-0301af98d73b-combined-ca-bundle\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 
10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.053299 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt5ds\" (UniqueName: \"kubernetes.io/projected/ec9639a6-9853-49cd-8215-0301af98d73b-kube-api-access-zt5ds\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.053317 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ec9639a6-9853-49cd-8215-0301af98d73b-swiftconf\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.053357 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ec9639a6-9853-49cd-8215-0301af98d73b-ring-data-devices\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.054261 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-etc-swift\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.054710 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-ring-data-devices\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:58 crc 
kubenswrapper[4728]: I0227 10:46:58.054916 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-scripts\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.056535 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-swiftconf\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.070221 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-dispersionconf\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.070487 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-combined-ca-bundle\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.076430 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8skrr\" (UniqueName: \"kubernetes.io/projected/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-kube-api-access-8skrr\") pod \"swift-ring-rebalance-5fw24\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.095523 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.095750 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0be54b22-6600-4033-92e9-10fd8a540238","Type":"ContainerStarted","Data":"9d7d593a420bc68360e6827dec9f554945a330565d076934122204eca2625085"} Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.099318 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-78c47f664f-wpqzf" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.144280 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.159177 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9639a6-9853-49cd-8215-0301af98d73b-combined-ca-bundle\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.159223 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt5ds\" (UniqueName: \"kubernetes.io/projected/ec9639a6-9853-49cd-8215-0301af98d73b-kube-api-access-zt5ds\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.159240 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ec9639a6-9853-49cd-8215-0301af98d73b-swiftconf\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.159279 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ec9639a6-9853-49cd-8215-0301af98d73b-ring-data-devices\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.159337 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec9639a6-9853-49cd-8215-0301af98d73b-scripts\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.159366 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ec9639a6-9853-49cd-8215-0301af98d73b-dispersionconf\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.159404 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.159446 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ec9639a6-9853-49cd-8215-0301af98d73b-etc-swift\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.159810 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/ec9639a6-9853-49cd-8215-0301af98d73b-etc-swift\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.160862 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ec9639a6-9853-49cd-8215-0301af98d73b-ring-data-devices\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: E0227 10:46:58.162789 4728 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 10:46:58 crc kubenswrapper[4728]: E0227 10:46:58.162813 4728 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 10:46:58 crc kubenswrapper[4728]: E0227 10:46:58.162879 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift podName:ec0a9664-7538-43dd-904d-c386d569999e nodeName:}" failed. No retries permitted until 2026-02-27 10:46:59.162861505 +0000 UTC m=+1239.125227611 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift") pod "swift-storage-0" (UID: "ec0a9664-7538-43dd-904d-c386d569999e") : configmap "swift-ring-files" not found Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.163252 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec9639a6-9853-49cd-8215-0301af98d73b-scripts\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.164072 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9639a6-9853-49cd-8215-0301af98d73b-combined-ca-bundle\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.169293 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ec9639a6-9853-49cd-8215-0301af98d73b-swiftconf\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.171672 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ec9639a6-9853-49cd-8215-0301af98d73b-dispersionconf\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.173860 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7b97b57845-jhmxj"] Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.185320 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zt5ds\" (UniqueName: \"kubernetes.io/projected/ec9639a6-9853-49cd-8215-0301af98d73b-kube-api-access-zt5ds\") pod \"swift-ring-rebalance-wnkfg\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.251914 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.260330 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-etc-swift\") pod \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.260663 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-swiftconf\") pod \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.260742 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-ring-data-devices\") pod \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.260784 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-scripts\") pod \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.260841 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-dispersionconf\") pod \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.260866 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8skrr\" (UniqueName: \"kubernetes.io/projected/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-kube-api-access-8skrr\") pod \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.260889 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-combined-ca-bundle\") pod \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\" (UID: \"51e74345-bd5e-4679-b4f5-3fbb1c1c6752\") " Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.261859 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "51e74345-bd5e-4679-b4f5-3fbb1c1c6752" (UID: "51e74345-bd5e-4679-b4f5-3fbb1c1c6752"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.263273 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-scripts" (OuterVolumeSpecName: "scripts") pod "51e74345-bd5e-4679-b4f5-3fbb1c1c6752" (UID: "51e74345-bd5e-4679-b4f5-3fbb1c1c6752"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.263386 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "51e74345-bd5e-4679-b4f5-3fbb1c1c6752" (UID: "51e74345-bd5e-4679-b4f5-3fbb1c1c6752"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.269217 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-kube-api-access-8skrr" (OuterVolumeSpecName: "kube-api-access-8skrr") pod "51e74345-bd5e-4679-b4f5-3fbb1c1c6752" (UID: "51e74345-bd5e-4679-b4f5-3fbb1c1c6752"). InnerVolumeSpecName "kube-api-access-8skrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.269807 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "51e74345-bd5e-4679-b4f5-3fbb1c1c6752" (UID: "51e74345-bd5e-4679-b4f5-3fbb1c1c6752"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.270022 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51e74345-bd5e-4679-b4f5-3fbb1c1c6752" (UID: "51e74345-bd5e-4679-b4f5-3fbb1c1c6752"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.271757 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "51e74345-bd5e-4679-b4f5-3fbb1c1c6752" (UID: "51e74345-bd5e-4679-b4f5-3fbb1c1c6752"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.364031 4728 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.364066 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8skrr\" (UniqueName: \"kubernetes.io/projected/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-kube-api-access-8skrr\") on node \"crc\" DevicePath \"\"" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.364078 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.364086 4728 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.364097 4728 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.364105 4728 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 27 10:46:58 crc kubenswrapper[4728]: I0227 10:46:58.364115 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51e74345-bd5e-4679-b4f5-3fbb1c1c6752-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:46:59 crc kubenswrapper[4728]: I0227 10:46:59.101916 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5fw24" Feb 27 10:46:59 crc kubenswrapper[4728]: I0227 10:46:59.186845 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-5fw24"] Feb 27 10:46:59 crc kubenswrapper[4728]: E0227 10:46:59.193579 4728 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 10:46:59 crc kubenswrapper[4728]: E0227 10:46:59.193610 4728 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 10:46:59 crc kubenswrapper[4728]: E0227 10:46:59.193653 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift podName:ec0a9664-7538-43dd-904d-c386d569999e nodeName:}" failed. No retries permitted until 2026-02-27 10:47:01.193638622 +0000 UTC m=+1241.156004728 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift") pod "swift-storage-0" (UID: "ec0a9664-7538-43dd-904d-c386d569999e") : configmap "swift-ring-files" not found Feb 27 10:46:59 crc kubenswrapper[4728]: I0227 10:46:59.193690 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:46:59 crc kubenswrapper[4728]: I0227 10:46:59.195617 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-5fw24"] Feb 27 10:47:00 crc kubenswrapper[4728]: I0227 10:47:00.741404 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51e74345-bd5e-4679-b4f5-3fbb1c1c6752" path="/var/lib/kubelet/pods/51e74345-bd5e-4679-b4f5-3fbb1c1c6752/volumes" Feb 27 10:47:00 crc kubenswrapper[4728]: I0227 10:47:00.883590 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-xgqx9"] Feb 27 10:47:01 crc kubenswrapper[4728]: W0227 10:47:01.148293 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod962b4d7e_3021_44a1_9374_0cbf20fbffa1.slice/crio-36fbb57183b1fb2a7642f23293cbb6628082346ec54ddc53339f894898aaa5b9 WatchSource:0}: Error finding container 36fbb57183b1fb2a7642f23293cbb6628082346ec54ddc53339f894898aaa5b9: Status 404 returned error can't find the container with id 36fbb57183b1fb2a7642f23293cbb6628082346ec54ddc53339f894898aaa5b9 Feb 27 10:47:01 crc kubenswrapper[4728]: I0227 10:47:01.239555 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift\") pod \"swift-storage-0\" (UID: 
\"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:47:01 crc kubenswrapper[4728]: E0227 10:47:01.239928 4728 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 10:47:01 crc kubenswrapper[4728]: E0227 10:47:01.239978 4728 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 10:47:01 crc kubenswrapper[4728]: E0227 10:47:01.240074 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift podName:ec0a9664-7538-43dd-904d-c386d569999e nodeName:}" failed. No retries permitted until 2026-02-27 10:47:05.240047051 +0000 UTC m=+1245.202413187 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift") pod "swift-storage-0" (UID: "ec0a9664-7538-43dd-904d-c386d569999e") : configmap "swift-ring-files" not found Feb 27 10:47:01 crc kubenswrapper[4728]: I0227 10:47:01.810269 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wnkfg"] Feb 27 10:47:01 crc kubenswrapper[4728]: W0227 10:47:01.845889 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec9639a6_9853_49cd_8215_0301af98d73b.slice/crio-b03c3073e1f6600afacebbceb93f931e0ced2a013caf7bec019d40a36cf5638b WatchSource:0}: Error finding container b03c3073e1f6600afacebbceb93f931e0ced2a013caf7bec019d40a36cf5638b: Status 404 returned error can't find the container with id b03c3073e1f6600afacebbceb93f931e0ced2a013caf7bec019d40a36cf5638b Feb 27 10:47:02 crc kubenswrapper[4728]: I0227 10:47:02.142962 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" 
event={"ID":"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0","Type":"ContainerStarted","Data":"bf08c00ae0b6de90498f13765571e32b34d1995d386e1fc6673dd5bdcdd531a6"} Feb 27 10:47:02 crc kubenswrapper[4728]: I0227 10:47:02.143126 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" podUID="b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0" containerName="dnsmasq-dns" containerID="cri-o://bf08c00ae0b6de90498f13765571e32b34d1995d386e1fc6673dd5bdcdd531a6" gracePeriod=10 Feb 27 10:47:02 crc kubenswrapper[4728]: I0227 10:47:02.143439 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" Feb 27 10:47:02 crc kubenswrapper[4728]: I0227 10:47:02.148681 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wnkfg" event={"ID":"ec9639a6-9853-49cd-8215-0301af98d73b","Type":"ContainerStarted","Data":"b03c3073e1f6600afacebbceb93f931e0ced2a013caf7bec019d40a36cf5638b"} Feb 27 10:47:02 crc kubenswrapper[4728]: I0227 10:47:02.150977 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa","Type":"ContainerStarted","Data":"f331ebdb27d148ecd5b44eada11e860f244b03f754ed7cd95e22b2ed29fb2acc"} Feb 27 10:47:02 crc kubenswrapper[4728]: I0227 10:47:02.153566 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" event={"ID":"962b4d7e-3021-44a1-9374-0cbf20fbffa1","Type":"ContainerStarted","Data":"36fbb57183b1fb2a7642f23293cbb6628082346ec54ddc53339f894898aaa5b9"} Feb 27 10:47:02 crc kubenswrapper[4728]: I0227 10:47:02.166533 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" podStartSLOduration=18.432558677 podStartE2EDuration="33.166495449s" podCreationTimestamp="2026-02-27 10:46:29 +0000 UTC" firstStartedPulling="2026-02-27 10:46:30.407871398 +0000 UTC 
m=+1210.370237504" lastFinishedPulling="2026-02-27 10:46:45.14180817 +0000 UTC m=+1225.104174276" observedRunningTime="2026-02-27 10:47:02.160979028 +0000 UTC m=+1242.123345134" watchObservedRunningTime="2026-02-27 10:47:02.166495449 +0000 UTC m=+1242.128861555" Feb 27 10:47:03 crc kubenswrapper[4728]: I0227 10:47:03.170702 4728 generic.go:334] "Generic (PLEG): container finished" podID="b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0" containerID="bf08c00ae0b6de90498f13765571e32b34d1995d386e1fc6673dd5bdcdd531a6" exitCode=0 Feb 27 10:47:03 crc kubenswrapper[4728]: I0227 10:47:03.170786 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" event={"ID":"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0","Type":"ContainerDied","Data":"bf08c00ae0b6de90498f13765571e32b34d1995d386e1fc6673dd5bdcdd531a6"} Feb 27 10:47:03 crc kubenswrapper[4728]: I0227 10:47:03.173657 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-jc2j2" event={"ID":"fec191d1-b76f-4b8c-94c2-2d217a21951c","Type":"ContainerStarted","Data":"40e7c8595a5fa399403409cea59abee062ebc14f119590eed15bcb1c7ada340a"} Feb 27 10:47:03 crc kubenswrapper[4728]: I0227 10:47:03.196092 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-jc2j2" podStartSLOduration=13.186462122 podStartE2EDuration="27.196068204s" podCreationTimestamp="2026-02-27 10:46:36 +0000 UTC" firstStartedPulling="2026-02-27 10:46:47.244244738 +0000 UTC m=+1227.206610844" lastFinishedPulling="2026-02-27 10:47:01.25385081 +0000 UTC m=+1241.216216926" observedRunningTime="2026-02-27 10:47:03.191627012 +0000 UTC m=+1243.153993128" watchObservedRunningTime="2026-02-27 10:47:03.196068204 +0000 UTC m=+1243.158434300" Feb 27 10:47:03 crc kubenswrapper[4728]: I0227 10:47:03.376362 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" Feb 27 10:47:03 crc kubenswrapper[4728]: I0227 10:47:03.488712 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0-dns-svc\") pod \"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0\" (UID: \"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0\") " Feb 27 10:47:03 crc kubenswrapper[4728]: I0227 10:47:03.488933 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0-config\") pod \"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0\" (UID: \"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0\") " Feb 27 10:47:03 crc kubenswrapper[4728]: I0227 10:47:03.488965 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8brl\" (UniqueName: \"kubernetes.io/projected/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0-kube-api-access-m8brl\") pod \"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0\" (UID: \"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0\") " Feb 27 10:47:03 crc kubenswrapper[4728]: I0227 10:47:03.500837 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0-kube-api-access-m8brl" (OuterVolumeSpecName: "kube-api-access-m8brl") pod "b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0" (UID: "b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0"). InnerVolumeSpecName "kube-api-access-m8brl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:03 crc kubenswrapper[4728]: I0227 10:47:03.584671 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0-config" (OuterVolumeSpecName: "config") pod "b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0" (UID: "b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:03 crc kubenswrapper[4728]: I0227 10:47:03.591738 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:03 crc kubenswrapper[4728]: I0227 10:47:03.591761 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8brl\" (UniqueName: \"kubernetes.io/projected/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0-kube-api-access-m8brl\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:03 crc kubenswrapper[4728]: I0227 10:47:03.682018 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0" (UID: "b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:03 crc kubenswrapper[4728]: I0227 10:47:03.693764 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.207224 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bd5fc" event={"ID":"20d22b86-c3cb-4b12-8e88-35369d033e1e","Type":"ContainerStarted","Data":"e4d3027353112c9720db1fc373d2f8b10587506cfe62f5edd6bb53893fcdae30"} Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.207656 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-bd5fc" Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.210868 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" 
event={"ID":"ad00da50-2e05-4612-a862-5cccd698e77b","Type":"ContainerStarted","Data":"ece13434c955547aaf3f7f164eaf74b912d99426d2f94d33488bf7c110f9b30c"} Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.216395 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" event={"ID":"b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0","Type":"ContainerDied","Data":"7de452135186dd165ad0cf00c04ac3d340b9651ae13fea7b8f3e50e76f13b0ff"} Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.216451 4728 scope.go:117] "RemoveContainer" containerID="bf08c00ae0b6de90498f13765571e32b34d1995d386e1fc6673dd5bdcdd531a6" Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.216725 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-zpz7d" Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.219004 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"94cc80b4-aa0c-439a-be56-22b86add0bb3","Type":"ContainerStarted","Data":"09bbab08f2fe5756df506f98a7852ea7c025a8ca59fb0ba51048276525049d9c"} Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.223237 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ed5c2715-a8a7-4d10-ba69-32133e2b6e51","Type":"ContainerStarted","Data":"1503074435abecf09855a93619b6cf0dbeab23c896af55fe7e0a4d539da35b29"} Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.223994 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.225283 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bhldn" event={"ID":"e03375ec-5705-44e3-9dda-686c809cf4ef","Type":"ContainerStarted","Data":"184f22737be582806619cbb09f7d91b87be49f49156944181fa187eddd8f9e27"} Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.237024 4728 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-bd5fc" podStartSLOduration=12.493945379 podStartE2EDuration="26.237003761s" podCreationTimestamp="2026-02-27 10:46:38 +0000 UTC" firstStartedPulling="2026-02-27 10:46:47.273658797 +0000 UTC m=+1227.236024913" lastFinishedPulling="2026-02-27 10:47:01.016717149 +0000 UTC m=+1240.979083295" observedRunningTime="2026-02-27 10:47:04.234202174 +0000 UTC m=+1244.196568280" watchObservedRunningTime="2026-02-27 10:47:04.237003761 +0000 UTC m=+1244.199369867" Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.239593 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"803ed01f-b95c-4718-a5e8-3a864b0b7850","Type":"ContainerStarted","Data":"4c43c64c2c4c38c2fd8a856786061ac54c268113467653f8b8ab10dbeed5528f"} Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.241307 4728 generic.go:334] "Generic (PLEG): container finished" podID="962b4d7e-3021-44a1-9374-0cbf20fbffa1" containerID="8e15a1287eb1c0670fa53a80fc24f1e50a003ed7832c1f16439e02ae653adafe" exitCode=0 Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.241489 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" event={"ID":"962b4d7e-3021-44a1-9374-0cbf20fbffa1","Type":"ContainerDied","Data":"8e15a1287eb1c0670fa53a80fc24f1e50a003ed7832c1f16439e02ae653adafe"} Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.250155 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0be54b22-6600-4033-92e9-10fd8a540238","Type":"ContainerStarted","Data":"1d3495dfb15a51ddc6e96e3678151ea540244e1b6cc8d47c52f161b5cc09036f"} Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.326032 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=13.562033868 podStartE2EDuration="29.326007808s" 
podCreationTimestamp="2026-02-27 10:46:35 +0000 UTC" firstStartedPulling="2026-02-27 10:46:47.243919469 +0000 UTC m=+1227.206285585" lastFinishedPulling="2026-02-27 10:47:03.007893419 +0000 UTC m=+1242.970259525" observedRunningTime="2026-02-27 10:47:04.312259981 +0000 UTC m=+1244.274626087" watchObservedRunningTime="2026-02-27 10:47:04.326007808 +0000 UTC m=+1244.288373914" Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.342737 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-zpz7d"] Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.356034 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-zpz7d"] Feb 27 10:47:04 crc kubenswrapper[4728]: I0227 10:47:04.747778 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0" path="/var/lib/kubelet/pods/b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0/volumes" Feb 27 10:47:05 crc kubenswrapper[4728]: I0227 10:47:05.169174 4728 scope.go:117] "RemoveContainer" containerID="6493c0364199145f1e4802e35b468d4a15e48c3fe1388feef2e9b63d4aba2117" Feb 27 10:47:05 crc kubenswrapper[4728]: I0227 10:47:05.262099 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"d96ab6cd-ed9d-4924-9566-91930411701d","Type":"ContainerStarted","Data":"e4997d36ad5328d03f64dcc85aa7e6861c52e45b14220b75791b03e527d710b7"} Feb 27 10:47:05 crc kubenswrapper[4728]: I0227 10:47:05.266445 4728 generic.go:334] "Generic (PLEG): container finished" podID="e03375ec-5705-44e3-9dda-686c809cf4ef" containerID="184f22737be582806619cbb09f7d91b87be49f49156944181fa187eddd8f9e27" exitCode=0 Feb 27 10:47:05 crc kubenswrapper[4728]: I0227 10:47:05.266530 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bhldn" 
event={"ID":"e03375ec-5705-44e3-9dda-686c809cf4ef","Type":"ContainerDied","Data":"184f22737be582806619cbb09f7d91b87be49f49156944181fa187eddd8f9e27"} Feb 27 10:47:05 crc kubenswrapper[4728]: I0227 10:47:05.269107 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5948716b-2c2b-4a90-b4b5-f8daad17f020","Type":"ContainerStarted","Data":"1ea7d949740097930544753162eda041e0ea42aee0a8c73e677e4887cdfaae0c"} Feb 27 10:47:05 crc kubenswrapper[4728]: I0227 10:47:05.270999 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"26ecfb63-8476-497d-9cb3-3729c4961b4e","Type":"ContainerStarted","Data":"7085535c1cf06df2af491ea6ba1e48ccf7c883b1ebac3eccf340158c02955b37"} Feb 27 10:47:05 crc kubenswrapper[4728]: I0227 10:47:05.356937 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:47:05 crc kubenswrapper[4728]: E0227 10:47:05.359447 4728 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 10:47:05 crc kubenswrapper[4728]: E0227 10:47:05.359467 4728 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 10:47:05 crc kubenswrapper[4728]: E0227 10:47:05.359527 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift podName:ec0a9664-7538-43dd-904d-c386d569999e nodeName:}" failed. No retries permitted until 2026-02-27 10:47:13.359492561 +0000 UTC m=+1253.321858667 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift") pod "swift-storage-0" (UID: "ec0a9664-7538-43dd-904d-c386d569999e") : configmap "swift-ring-files" not found Feb 27 10:47:07 crc kubenswrapper[4728]: I0227 10:47:07.290968 4728 generic.go:334] "Generic (PLEG): container finished" podID="a7b93ac4-55f2-4491-b4b4-f8abfd837dfa" containerID="f331ebdb27d148ecd5b44eada11e860f244b03f754ed7cd95e22b2ed29fb2acc" exitCode=0 Feb 27 10:47:07 crc kubenswrapper[4728]: I0227 10:47:07.291024 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa","Type":"ContainerDied","Data":"f331ebdb27d148ecd5b44eada11e860f244b03f754ed7cd95e22b2ed29fb2acc"} Feb 27 10:47:07 crc kubenswrapper[4728]: I0227 10:47:07.296909 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d90c432-384c-4a43-a2cf-b26c3804a632","Type":"ContainerStarted","Data":"557b65567de2e726ad23081e70f8b025af2dcf1b9d751df1a88af8882fa6306c"} Feb 27 10:47:09 crc kubenswrapper[4728]: I0227 10:47:09.323904 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bhldn" event={"ID":"e03375ec-5705-44e3-9dda-686c809cf4ef","Type":"ContainerStarted","Data":"0c946ad9262ad5aabfdeddd7aff414cdf45bde467917edb0d9257343a6d62688"} Feb 27 10:47:09 crc kubenswrapper[4728]: I0227 10:47:09.324766 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bhldn" event={"ID":"e03375ec-5705-44e3-9dda-686c809cf4ef","Type":"ContainerStarted","Data":"8807c085f12539b61e761ecdabdf4d1dad7755bd16bf503699f64ac1c46ccf70"} Feb 27 10:47:09 crc kubenswrapper[4728]: I0227 10:47:09.326836 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:47:09 crc kubenswrapper[4728]: I0227 10:47:09.326877 4728 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:47:09 crc kubenswrapper[4728]: I0227 10:47:09.330763 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a7b93ac4-55f2-4491-b4b4-f8abfd837dfa","Type":"ContainerStarted","Data":"adb06cd29e4b64d9c5f99566fe08ec633a03b6aa3a05579e0e0486177795c0d4"} Feb 27 10:47:09 crc kubenswrapper[4728]: I0227 10:47:09.333270 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" event={"ID":"962b4d7e-3021-44a1-9374-0cbf20fbffa1","Type":"ContainerStarted","Data":"baef38030692159fa7735cb0de86d609112173b26957793cc7cd8d6239ef2304"} Feb 27 10:47:09 crc kubenswrapper[4728]: I0227 10:47:09.333603 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" Feb 27 10:47:09 crc kubenswrapper[4728]: I0227 10:47:09.336316 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0be54b22-6600-4033-92e9-10fd8a540238","Type":"ContainerStarted","Data":"b6162016f3ee68b31ac2fa61035bcb4f9f61a10632a286b927a8e557a72a3c1e"} Feb 27 10:47:09 crc kubenswrapper[4728]: I0227 10:47:09.337955 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wnkfg" event={"ID":"ec9639a6-9853-49cd-8215-0301af98d73b","Type":"ContainerStarted","Data":"c68cbc24be9bbc52102364afbc3388715622e2cc40cad1303b39b7df2ccb4365"} Feb 27 10:47:09 crc kubenswrapper[4728]: I0227 10:47:09.341343 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"94cc80b4-aa0c-439a-be56-22b86add0bb3","Type":"ContainerStarted","Data":"9a03f8a2d26a6c44d11a561db346f00c2449dab549374decbab9108c66c40da7"} Feb 27 10:47:09 crc kubenswrapper[4728]: I0227 10:47:09.344638 4728 generic.go:334] "Generic (PLEG): container finished" podID="803ed01f-b95c-4718-a5e8-3a864b0b7850" 
containerID="4c43c64c2c4c38c2fd8a856786061ac54c268113467653f8b8ab10dbeed5528f" exitCode=0 Feb 27 10:47:09 crc kubenswrapper[4728]: I0227 10:47:09.344682 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"803ed01f-b95c-4718-a5e8-3a864b0b7850","Type":"ContainerDied","Data":"4c43c64c2c4c38c2fd8a856786061ac54c268113467653f8b8ab10dbeed5528f"} Feb 27 10:47:09 crc kubenswrapper[4728]: I0227 10:47:09.361561 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-bhldn" podStartSLOduration=17.657214945 podStartE2EDuration="31.361535662s" podCreationTimestamp="2026-02-27 10:46:38 +0000 UTC" firstStartedPulling="2026-02-27 10:46:47.45158459 +0000 UTC m=+1227.413950696" lastFinishedPulling="2026-02-27 10:47:01.155905307 +0000 UTC m=+1241.118271413" observedRunningTime="2026-02-27 10:47:09.357206223 +0000 UTC m=+1249.319572349" watchObservedRunningTime="2026-02-27 10:47:09.361535662 +0000 UTC m=+1249.323901778" Feb 27 10:47:09 crc kubenswrapper[4728]: I0227 10:47:09.429481 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=17.300470675 podStartE2EDuration="31.42945619s" podCreationTimestamp="2026-02-27 10:46:38 +0000 UTC" firstStartedPulling="2026-02-27 10:46:54.276460254 +0000 UTC m=+1234.238826370" lastFinishedPulling="2026-02-27 10:47:08.405445769 +0000 UTC m=+1248.367811885" observedRunningTime="2026-02-27 10:47:09.37999727 +0000 UTC m=+1249.342363386" watchObservedRunningTime="2026-02-27 10:47:09.42945619 +0000 UTC m=+1249.391822296" Feb 27 10:47:09 crc kubenswrapper[4728]: I0227 10:47:09.454691 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" podStartSLOduration=13.454667733 podStartE2EDuration="13.454667733s" podCreationTimestamp="2026-02-27 10:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:47:09.399591469 +0000 UTC m=+1249.361957585" watchObservedRunningTime="2026-02-27 10:47:09.454667733 +0000 UTC m=+1249.417033839" Feb 27 10:47:09 crc kubenswrapper[4728]: I0227 10:47:09.458925 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.985911756 podStartE2EDuration="37.4589148s" podCreationTimestamp="2026-02-27 10:46:32 +0000 UTC" firstStartedPulling="2026-02-27 10:46:45.577129381 +0000 UTC m=+1225.539495487" lastFinishedPulling="2026-02-27 10:46:59.050132425 +0000 UTC m=+1239.012498531" observedRunningTime="2026-02-27 10:47:09.421531113 +0000 UTC m=+1249.383897229" watchObservedRunningTime="2026-02-27 10:47:09.4589148 +0000 UTC m=+1249.421280906" Feb 27 10:47:09 crc kubenswrapper[4728]: I0227 10:47:09.497820 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=16.616389026 podStartE2EDuration="27.497794829s" podCreationTimestamp="2026-02-27 10:46:42 +0000 UTC" firstStartedPulling="2026-02-27 10:46:57.491389558 +0000 UTC m=+1237.453755674" lastFinishedPulling="2026-02-27 10:47:08.372795341 +0000 UTC m=+1248.335161477" observedRunningTime="2026-02-27 10:47:09.470807267 +0000 UTC m=+1249.433173373" watchObservedRunningTime="2026-02-27 10:47:09.497794829 +0000 UTC m=+1249.460160935" Feb 27 10:47:09 crc kubenswrapper[4728]: I0227 10:47:09.512271 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-wnkfg" podStartSLOduration=6.101226154 podStartE2EDuration="12.512243146s" podCreationTimestamp="2026-02-27 10:46:57 +0000 UTC" firstStartedPulling="2026-02-27 10:47:01.849059089 +0000 UTC m=+1241.811425195" lastFinishedPulling="2026-02-27 10:47:08.260076081 +0000 UTC m=+1248.222442187" observedRunningTime="2026-02-27 10:47:09.492388371 +0000 UTC m=+1249.454754487" 
watchObservedRunningTime="2026-02-27 10:47:09.512243146 +0000 UTC m=+1249.474609252" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.250999 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.251548 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.308824 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.362177 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"803ed01f-b95c-4718-a5e8-3a864b0b7850","Type":"ContainerStarted","Data":"a68b6be4a6ee9ec3054f70ba31b8e7e8b3a32c808b67bf47c23129f6e3102a88"} Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.390955 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=25.398462712 podStartE2EDuration="39.390929922s" podCreationTimestamp="2026-02-27 10:46:31 +0000 UTC" firstStartedPulling="2026-02-27 10:46:47.273841003 +0000 UTC m=+1227.236207109" lastFinishedPulling="2026-02-27 10:47:01.266308183 +0000 UTC m=+1241.228674319" observedRunningTime="2026-02-27 10:47:10.38541444 +0000 UTC m=+1250.347780556" watchObservedRunningTime="2026-02-27 10:47:10.390929922 +0000 UTC m=+1250.353296038" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.424666 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.723701 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-xgqx9"] Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.761494 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-sb-0" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.767901 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-fqrzs"] Feb 27 10:47:10 crc kubenswrapper[4728]: E0227 10:47:10.768314 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0" containerName="init" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.768334 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0" containerName="init" Feb 27 10:47:10 crc kubenswrapper[4728]: E0227 10:47:10.768359 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0" containerName="dnsmasq-dns" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.768365 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0" containerName="dnsmasq-dns" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.768579 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7f588e1-6d0a-44ac-a7a4-967b0ecb4cd0" containerName="dnsmasq-dns" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.769723 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.774327 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.840661 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-nrfxq"] Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.849059 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.851227 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.857365 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.873774 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-fqrzs"] Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.883110 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-nrfxq"] Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.911624 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lpmm\" (UniqueName: \"kubernetes.io/projected/54ecd495-526a-426c-8a53-911882e04cd8-kube-api-access-6lpmm\") pod \"dnsmasq-dns-57d65f699f-fqrzs\" (UID: \"54ecd495-526a-426c-8a53-911882e04cd8\") " pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.911683 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54ecd495-526a-426c-8a53-911882e04cd8-config\") pod \"dnsmasq-dns-57d65f699f-fqrzs\" (UID: \"54ecd495-526a-426c-8a53-911882e04cd8\") " pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 10:47:10.911758 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54ecd495-526a-426c-8a53-911882e04cd8-dns-svc\") pod \"dnsmasq-dns-57d65f699f-fqrzs\" (UID: \"54ecd495-526a-426c-8a53-911882e04cd8\") " pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" Feb 27 10:47:10 crc kubenswrapper[4728]: I0227 
10:47:10.911820 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54ecd495-526a-426c-8a53-911882e04cd8-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-fqrzs\" (UID: \"54ecd495-526a-426c-8a53-911882e04cd8\") " pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.013943 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54ecd495-526a-426c-8a53-911882e04cd8-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-fqrzs\" (UID: \"54ecd495-526a-426c-8a53-911882e04cd8\") " pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.014027 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/73bd084b-f8c7-4dcd-8d01-fcf8f8587275-ovn-rundir\") pod \"ovn-controller-metrics-nrfxq\" (UID: \"73bd084b-f8c7-4dcd-8d01-fcf8f8587275\") " pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.014087 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lpmm\" (UniqueName: \"kubernetes.io/projected/54ecd495-526a-426c-8a53-911882e04cd8-kube-api-access-6lpmm\") pod \"dnsmasq-dns-57d65f699f-fqrzs\" (UID: \"54ecd495-526a-426c-8a53-911882e04cd8\") " pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.014116 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m998r\" (UniqueName: \"kubernetes.io/projected/73bd084b-f8c7-4dcd-8d01-fcf8f8587275-kube-api-access-m998r\") pod \"ovn-controller-metrics-nrfxq\" (UID: \"73bd084b-f8c7-4dcd-8d01-fcf8f8587275\") " pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc 
kubenswrapper[4728]: I0227 10:47:11.014143 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54ecd495-526a-426c-8a53-911882e04cd8-config\") pod \"dnsmasq-dns-57d65f699f-fqrzs\" (UID: \"54ecd495-526a-426c-8a53-911882e04cd8\") " pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.014159 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73bd084b-f8c7-4dcd-8d01-fcf8f8587275-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nrfxq\" (UID: \"73bd084b-f8c7-4dcd-8d01-fcf8f8587275\") " pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.014805 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73bd084b-f8c7-4dcd-8d01-fcf8f8587275-config\") pod \"ovn-controller-metrics-nrfxq\" (UID: \"73bd084b-f8c7-4dcd-8d01-fcf8f8587275\") " pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.015168 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54ecd495-526a-426c-8a53-911882e04cd8-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-fqrzs\" (UID: \"54ecd495-526a-426c-8a53-911882e04cd8\") " pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.015435 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54ecd495-526a-426c-8a53-911882e04cd8-config\") pod \"dnsmasq-dns-57d65f699f-fqrzs\" (UID: \"54ecd495-526a-426c-8a53-911882e04cd8\") " pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.016032 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54ecd495-526a-426c-8a53-911882e04cd8-dns-svc\") pod \"dnsmasq-dns-57d65f699f-fqrzs\" (UID: \"54ecd495-526a-426c-8a53-911882e04cd8\") " pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.016261 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73bd084b-f8c7-4dcd-8d01-fcf8f8587275-combined-ca-bundle\") pod \"ovn-controller-metrics-nrfxq\" (UID: \"73bd084b-f8c7-4dcd-8d01-fcf8f8587275\") " pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.016328 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/73bd084b-f8c7-4dcd-8d01-fcf8f8587275-ovs-rundir\") pod \"ovn-controller-metrics-nrfxq\" (UID: \"73bd084b-f8c7-4dcd-8d01-fcf8f8587275\") " pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.017318 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54ecd495-526a-426c-8a53-911882e04cd8-dns-svc\") pod \"dnsmasq-dns-57d65f699f-fqrzs\" (UID: \"54ecd495-526a-426c-8a53-911882e04cd8\") " pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.036453 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lpmm\" (UniqueName: \"kubernetes.io/projected/54ecd495-526a-426c-8a53-911882e04cd8-kube-api-access-6lpmm\") pod \"dnsmasq-dns-57d65f699f-fqrzs\" (UID: \"54ecd495-526a-426c-8a53-911882e04cd8\") " pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.095364 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-57d65f699f-fqrzs"] Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.096170 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.118652 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m998r\" (UniqueName: \"kubernetes.io/projected/73bd084b-f8c7-4dcd-8d01-fcf8f8587275-kube-api-access-m998r\") pod \"ovn-controller-metrics-nrfxq\" (UID: \"73bd084b-f8c7-4dcd-8d01-fcf8f8587275\") " pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.118703 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73bd084b-f8c7-4dcd-8d01-fcf8f8587275-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nrfxq\" (UID: \"73bd084b-f8c7-4dcd-8d01-fcf8f8587275\") " pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.118758 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73bd084b-f8c7-4dcd-8d01-fcf8f8587275-config\") pod \"ovn-controller-metrics-nrfxq\" (UID: \"73bd084b-f8c7-4dcd-8d01-fcf8f8587275\") " pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.118843 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73bd084b-f8c7-4dcd-8d01-fcf8f8587275-combined-ca-bundle\") pod \"ovn-controller-metrics-nrfxq\" (UID: \"73bd084b-f8c7-4dcd-8d01-fcf8f8587275\") " pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.118864 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/73bd084b-f8c7-4dcd-8d01-fcf8f8587275-ovs-rundir\") pod \"ovn-controller-metrics-nrfxq\" (UID: \"73bd084b-f8c7-4dcd-8d01-fcf8f8587275\") " pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.118908 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/73bd084b-f8c7-4dcd-8d01-fcf8f8587275-ovn-rundir\") pod \"ovn-controller-metrics-nrfxq\" (UID: \"73bd084b-f8c7-4dcd-8d01-fcf8f8587275\") " pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.119351 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/73bd084b-f8c7-4dcd-8d01-fcf8f8587275-ovs-rundir\") pod \"ovn-controller-metrics-nrfxq\" (UID: \"73bd084b-f8c7-4dcd-8d01-fcf8f8587275\") " pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.119401 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/73bd084b-f8c7-4dcd-8d01-fcf8f8587275-ovn-rundir\") pod \"ovn-controller-metrics-nrfxq\" (UID: \"73bd084b-f8c7-4dcd-8d01-fcf8f8587275\") " pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.120137 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73bd084b-f8c7-4dcd-8d01-fcf8f8587275-config\") pod \"ovn-controller-metrics-nrfxq\" (UID: \"73bd084b-f8c7-4dcd-8d01-fcf8f8587275\") " pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.128166 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73bd084b-f8c7-4dcd-8d01-fcf8f8587275-metrics-certs-tls-certs\") pod 
\"ovn-controller-metrics-nrfxq\" (UID: \"73bd084b-f8c7-4dcd-8d01-fcf8f8587275\") " pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.130838 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73bd084b-f8c7-4dcd-8d01-fcf8f8587275-combined-ca-bundle\") pod \"ovn-controller-metrics-nrfxq\" (UID: \"73bd084b-f8c7-4dcd-8d01-fcf8f8587275\") " pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.136876 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cp22r"] Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.141791 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.142643 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m998r\" (UniqueName: \"kubernetes.io/projected/73bd084b-f8c7-4dcd-8d01-fcf8f8587275-kube-api-access-m998r\") pod \"ovn-controller-metrics-nrfxq\" (UID: \"73bd084b-f8c7-4dcd-8d01-fcf8f8587275\") " pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.157645 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.160539 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cp22r"] Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.174248 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-nrfxq" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.224490 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-config\") pod \"dnsmasq-dns-b8fbc5445-cp22r\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.224586 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-cp22r\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.224626 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-cp22r\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.224648 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcpj7\" (UniqueName: \"kubernetes.io/projected/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-kube-api-access-wcpj7\") pod \"dnsmasq-dns-b8fbc5445-cp22r\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.224817 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-cp22r\" (UID: 
\"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.327224 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-cp22r\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.327300 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-config\") pod \"dnsmasq-dns-b8fbc5445-cp22r\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.327325 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-cp22r\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.327358 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-cp22r\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.327381 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcpj7\" (UniqueName: \"kubernetes.io/projected/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-kube-api-access-wcpj7\") pod \"dnsmasq-dns-b8fbc5445-cp22r\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.328332 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-cp22r\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.328490 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-cp22r\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.328978 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-config\") pod \"dnsmasq-dns-b8fbc5445-cp22r\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.329118 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-cp22r\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.348148 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcpj7\" (UniqueName: \"kubernetes.io/projected/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-kube-api-access-wcpj7\") pod \"dnsmasq-dns-b8fbc5445-cp22r\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.372205 4728 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" podUID="962b4d7e-3021-44a1-9374-0cbf20fbffa1" containerName="dnsmasq-dns" containerID="cri-o://baef38030692159fa7735cb0de86d609112173b26957793cc7cd8d6239ef2304" gracePeriod=10 Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.372669 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.507348 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.600299 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.730946 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-fqrzs"] Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.776306 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.781887 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.786778 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.786819 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.790470 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.793008 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-skvgk" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.794096 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.886632 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-nrfxq"] Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.944569 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf6gp\" (UniqueName: \"kubernetes.io/projected/7d741423-4119-4fd9-9314-50153ed061b6-kube-api-access-jf6gp\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.944635 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d741423-4119-4fd9-9314-50153ed061b6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.944695 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d741423-4119-4fd9-9314-50153ed061b6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.944722 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d741423-4119-4fd9-9314-50153ed061b6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.944903 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d741423-4119-4fd9-9314-50153ed061b6-scripts\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.944961 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7d741423-4119-4fd9-9314-50153ed061b6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:11 crc kubenswrapper[4728]: I0227 10:47:11.945153 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d741423-4119-4fd9-9314-50153ed061b6-config\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.047106 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d741423-4119-4fd9-9314-50153ed061b6-scripts\") pod \"ovn-northd-0\" (UID: 
\"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.047606 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7d741423-4119-4fd9-9314-50153ed061b6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.047725 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d741423-4119-4fd9-9314-50153ed061b6-config\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.047814 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf6gp\" (UniqueName: \"kubernetes.io/projected/7d741423-4119-4fd9-9314-50153ed061b6-kube-api-access-jf6gp\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.047849 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d741423-4119-4fd9-9314-50153ed061b6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.047904 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d741423-4119-4fd9-9314-50153ed061b6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.047934 4728 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d741423-4119-4fd9-9314-50153ed061b6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.051526 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d741423-4119-4fd9-9314-50153ed061b6-config\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.052158 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d741423-4119-4fd9-9314-50153ed061b6-scripts\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.052412 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7d741423-4119-4fd9-9314-50153ed061b6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.057115 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d741423-4119-4fd9-9314-50153ed061b6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.059440 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d741423-4119-4fd9-9314-50153ed061b6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:12 crc 
kubenswrapper[4728]: I0227 10:47:12.085899 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d741423-4119-4fd9-9314-50153ed061b6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.090070 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf6gp\" (UniqueName: \"kubernetes.io/projected/7d741423-4119-4fd9-9314-50153ed061b6-kube-api-access-jf6gp\") pod \"ovn-northd-0\" (UID: \"7d741423-4119-4fd9-9314-50153ed061b6\") " pod="openstack/ovn-northd-0" Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.116520 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.202573 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cp22r"] Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.378464 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" event={"ID":"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27","Type":"ContainerStarted","Data":"2b151b66d4ac6e85c04a7c1d444cd39284a3f2bd10db382d59bf7f0a4b0d1e99"} Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.388662 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" event={"ID":"54ecd495-526a-426c-8a53-911882e04cd8","Type":"ContainerStarted","Data":"9f5d5118c2bcdb451aa7780e6ba84038c519a7b283f3bc2948aa16a40a36071b"} Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.388712 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" event={"ID":"54ecd495-526a-426c-8a53-911882e04cd8","Type":"ContainerStarted","Data":"b3fde438d84f7a66d67c25def9cc959d95794d4f9fda87bd27c6d33a04c9f1a7"} Feb 27 10:47:12 crc 
kubenswrapper[4728]: I0227 10:47:12.388726 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" podUID="54ecd495-526a-426c-8a53-911882e04cd8" containerName="init" containerID="cri-o://9f5d5118c2bcdb451aa7780e6ba84038c519a7b283f3bc2948aa16a40a36071b" gracePeriod=10 Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.407716 4728 generic.go:334] "Generic (PLEG): container finished" podID="962b4d7e-3021-44a1-9374-0cbf20fbffa1" containerID="baef38030692159fa7735cb0de86d609112173b26957793cc7cd8d6239ef2304" exitCode=0 Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.407805 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" event={"ID":"962b4d7e-3021-44a1-9374-0cbf20fbffa1","Type":"ContainerDied","Data":"baef38030692159fa7735cb0de86d609112173b26957793cc7cd8d6239ef2304"} Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.409826 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-nrfxq" event={"ID":"73bd084b-f8c7-4dcd-8d01-fcf8f8587275","Type":"ContainerStarted","Data":"7827663421dbc0629e1df1df6ad7a17095a64e3f0e8aa5cd7f791113133f2dc9"} Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.510627 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.512207 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 27 10:47:12 crc kubenswrapper[4728]: I0227 10:47:12.861794 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 27 10:47:12 crc kubenswrapper[4728]: W0227 10:47:12.875079 4728 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d741423_4119_4fd9_9314_50153ed061b6.slice/crio-694889e7cea654c8355efd632a1627c37664761d94a42a64dab84e6a0ce8440f WatchSource:0}: Error finding container 694889e7cea654c8355efd632a1627c37664761d94a42a64dab84e6a0ce8440f: Status 404 returned error can't find the container with id 694889e7cea654c8355efd632a1627c37664761d94a42a64dab84e6a0ce8440f Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.382518 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:47:13 crc kubenswrapper[4728]: E0227 10:47:13.382785 4728 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 10:47:13 crc kubenswrapper[4728]: E0227 10:47:13.383191 4728 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 10:47:13 crc kubenswrapper[4728]: E0227 10:47:13.383269 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift podName:ec0a9664-7538-43dd-904d-c386d569999e nodeName:}" failed. No retries permitted until 2026-02-27 10:47:29.383243814 +0000 UTC m=+1269.345609930 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift") pod "swift-storage-0" (UID: "ec0a9664-7538-43dd-904d-c386d569999e") : configmap "swift-ring-files" not found Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.430381 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7d741423-4119-4fd9-9314-50153ed061b6","Type":"ContainerStarted","Data":"694889e7cea654c8355efd632a1627c37664761d94a42a64dab84e6a0ce8440f"} Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.440107 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-nrfxq" event={"ID":"73bd084b-f8c7-4dcd-8d01-fcf8f8587275","Type":"ContainerStarted","Data":"9317b3b3bcad3b797956d38b93bf36ea24d70b586ca3a7f49756f5273e505f7f"} Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.444574 4728 generic.go:334] "Generic (PLEG): container finished" podID="259eb6ad-1caa-4bb6-a1d5-2b81ba757e27" containerID="2905360e69e8ca10873c633e7134b4ef77dc9b252607c9695e9944df54067b73" exitCode=0 Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.444664 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" event={"ID":"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27","Type":"ContainerDied","Data":"2905360e69e8ca10873c633e7134b4ef77dc9b252607c9695e9944df54067b73"} Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.451351 4728 generic.go:334] "Generic (PLEG): container finished" podID="54ecd495-526a-426c-8a53-911882e04cd8" containerID="9f5d5118c2bcdb451aa7780e6ba84038c519a7b283f3bc2948aa16a40a36071b" exitCode=0 Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.451709 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" event={"ID":"54ecd495-526a-426c-8a53-911882e04cd8","Type":"ContainerDied","Data":"9f5d5118c2bcdb451aa7780e6ba84038c519a7b283f3bc2948aa16a40a36071b"} 
Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.514408 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-nrfxq" podStartSLOduration=3.514362811 podStartE2EDuration="3.514362811s" podCreationTimestamp="2026-02-27 10:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:47:13.478833503 +0000 UTC m=+1253.441199599" watchObservedRunningTime="2026-02-27 10:47:13.514362811 +0000 UTC m=+1253.476728917" Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.730730 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.731523 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.737698 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.737733 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.909428 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54ecd495-526a-426c-8a53-911882e04cd8-dns-svc\") pod \"54ecd495-526a-426c-8a53-911882e04cd8\" (UID: \"54ecd495-526a-426c-8a53-911882e04cd8\") " Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.909557 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jnj2\" (UniqueName: \"kubernetes.io/projected/962b4d7e-3021-44a1-9374-0cbf20fbffa1-kube-api-access-7jnj2\") pod \"962b4d7e-3021-44a1-9374-0cbf20fbffa1\" (UID: 
\"962b4d7e-3021-44a1-9374-0cbf20fbffa1\") " Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.909586 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/962b4d7e-3021-44a1-9374-0cbf20fbffa1-config\") pod \"962b4d7e-3021-44a1-9374-0cbf20fbffa1\" (UID: \"962b4d7e-3021-44a1-9374-0cbf20fbffa1\") " Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.909604 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54ecd495-526a-426c-8a53-911882e04cd8-ovsdbserver-nb\") pod \"54ecd495-526a-426c-8a53-911882e04cd8\" (UID: \"54ecd495-526a-426c-8a53-911882e04cd8\") " Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.909622 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lpmm\" (UniqueName: \"kubernetes.io/projected/54ecd495-526a-426c-8a53-911882e04cd8-kube-api-access-6lpmm\") pod \"54ecd495-526a-426c-8a53-911882e04cd8\" (UID: \"54ecd495-526a-426c-8a53-911882e04cd8\") " Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.909832 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/962b4d7e-3021-44a1-9374-0cbf20fbffa1-dns-svc\") pod \"962b4d7e-3021-44a1-9374-0cbf20fbffa1\" (UID: \"962b4d7e-3021-44a1-9374-0cbf20fbffa1\") " Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.909858 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54ecd495-526a-426c-8a53-911882e04cd8-config\") pod \"54ecd495-526a-426c-8a53-911882e04cd8\" (UID: \"54ecd495-526a-426c-8a53-911882e04cd8\") " Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.919685 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/962b4d7e-3021-44a1-9374-0cbf20fbffa1-kube-api-access-7jnj2" (OuterVolumeSpecName: "kube-api-access-7jnj2") pod "962b4d7e-3021-44a1-9374-0cbf20fbffa1" (UID: "962b4d7e-3021-44a1-9374-0cbf20fbffa1"). InnerVolumeSpecName "kube-api-access-7jnj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.919965 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54ecd495-526a-426c-8a53-911882e04cd8-kube-api-access-6lpmm" (OuterVolumeSpecName: "kube-api-access-6lpmm") pod "54ecd495-526a-426c-8a53-911882e04cd8" (UID: "54ecd495-526a-426c-8a53-911882e04cd8"). InnerVolumeSpecName "kube-api-access-6lpmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.938057 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54ecd495-526a-426c-8a53-911882e04cd8-config" (OuterVolumeSpecName: "config") pod "54ecd495-526a-426c-8a53-911882e04cd8" (UID: "54ecd495-526a-426c-8a53-911882e04cd8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.942952 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54ecd495-526a-426c-8a53-911882e04cd8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "54ecd495-526a-426c-8a53-911882e04cd8" (UID: "54ecd495-526a-426c-8a53-911882e04cd8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.949146 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54ecd495-526a-426c-8a53-911882e04cd8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "54ecd495-526a-426c-8a53-911882e04cd8" (UID: "54ecd495-526a-426c-8a53-911882e04cd8"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.965576 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/962b4d7e-3021-44a1-9374-0cbf20fbffa1-config" (OuterVolumeSpecName: "config") pod "962b4d7e-3021-44a1-9374-0cbf20fbffa1" (UID: "962b4d7e-3021-44a1-9374-0cbf20fbffa1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:13 crc kubenswrapper[4728]: I0227 10:47:13.969039 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/962b4d7e-3021-44a1-9374-0cbf20fbffa1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "962b4d7e-3021-44a1-9374-0cbf20fbffa1" (UID: "962b4d7e-3021-44a1-9374-0cbf20fbffa1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.012116 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54ecd495-526a-426c-8a53-911882e04cd8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.012157 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jnj2\" (UniqueName: \"kubernetes.io/projected/962b4d7e-3021-44a1-9374-0cbf20fbffa1-kube-api-access-7jnj2\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.012175 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lpmm\" (UniqueName: \"kubernetes.io/projected/54ecd495-526a-426c-8a53-911882e04cd8-kube-api-access-6lpmm\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.012187 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/962b4d7e-3021-44a1-9374-0cbf20fbffa1-config\") on node \"crc\" DevicePath 
\"\"" Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.012201 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54ecd495-526a-426c-8a53-911882e04cd8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.012213 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/962b4d7e-3021-44a1-9374-0cbf20fbffa1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.012224 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54ecd495-526a-426c-8a53-911882e04cd8-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.462831 4728 generic.go:334] "Generic (PLEG): container finished" podID="9d90c432-384c-4a43-a2cf-b26c3804a632" containerID="557b65567de2e726ad23081e70f8b025af2dcf1b9d751df1a88af8882fa6306c" exitCode=0 Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.462918 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d90c432-384c-4a43-a2cf-b26c3804a632","Type":"ContainerDied","Data":"557b65567de2e726ad23081e70f8b025af2dcf1b9d751df1a88af8882fa6306c"} Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.468510 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" event={"ID":"54ecd495-526a-426c-8a53-911882e04cd8","Type":"ContainerDied","Data":"b3fde438d84f7a66d67c25def9cc959d95794d4f9fda87bd27c6d33a04c9f1a7"} Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.468545 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-fqrzs" Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.468558 4728 scope.go:117] "RemoveContainer" containerID="9f5d5118c2bcdb451aa7780e6ba84038c519a7b283f3bc2948aa16a40a36071b" Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.479558 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" event={"ID":"962b4d7e-3021-44a1-9374-0cbf20fbffa1","Type":"ContainerDied","Data":"36fbb57183b1fb2a7642f23293cbb6628082346ec54ddc53339f894898aaa5b9"} Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.479609 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-xgqx9" Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.493463 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" event={"ID":"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27","Type":"ContainerStarted","Data":"e0b03c27257e3fcc204efc5377d7300bb0d3af0e465c13afadc6c96431efa460"} Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.494377 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.501311 4728 scope.go:117] "RemoveContainer" containerID="baef38030692159fa7735cb0de86d609112173b26957793cc7cd8d6239ef2304" Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.524372 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" podStartSLOduration=3.524350116 podStartE2EDuration="3.524350116s" podCreationTimestamp="2026-02-27 10:47:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:47:14.512987244 +0000 UTC m=+1254.475353370" watchObservedRunningTime="2026-02-27 10:47:14.524350116 +0000 UTC m=+1254.486716222" Feb 27 
10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.533817 4728 scope.go:117] "RemoveContainer" containerID="8e15a1287eb1c0670fa53a80fc24f1e50a003ed7832c1f16439e02ae653adafe" Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.566739 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-fqrzs"] Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.575162 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-fqrzs"] Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.583238 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-xgqx9"] Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.600688 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-xgqx9"] Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.738590 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54ecd495-526a-426c-8a53-911882e04cd8" path="/var/lib/kubelet/pods/54ecd495-526a-426c-8a53-911882e04cd8/volumes" Feb 27 10:47:14 crc kubenswrapper[4728]: I0227 10:47:14.739136 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="962b4d7e-3021-44a1-9374-0cbf20fbffa1" path="/var/lib/kubelet/pods/962b4d7e-3021-44a1-9374-0cbf20fbffa1/volumes" Feb 27 10:47:15 crc kubenswrapper[4728]: I0227 10:47:15.512328 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7d741423-4119-4fd9-9314-50153ed061b6","Type":"ContainerStarted","Data":"c49abe1e4cd3ceb6b64ba9f487aa5dff0d5a094a43f5658963ed57b8a1dc039c"} Feb 27 10:47:15 crc kubenswrapper[4728]: I0227 10:47:15.512839 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7d741423-4119-4fd9-9314-50153ed061b6","Type":"ContainerStarted","Data":"96a21a1115fc3d74ff0d45408b55f9e61d4ede687a79f3764e815883b233b230"} Feb 27 10:47:15 crc kubenswrapper[4728]: I0227 10:47:15.512910 4728 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 27 10:47:15 crc kubenswrapper[4728]: I0227 10:47:15.542899 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.103680118 podStartE2EDuration="4.542876448s" podCreationTimestamp="2026-02-27 10:47:11 +0000 UTC" firstStartedPulling="2026-02-27 10:47:12.880902319 +0000 UTC m=+1252.843268425" lastFinishedPulling="2026-02-27 10:47:14.320098649 +0000 UTC m=+1254.282464755" observedRunningTime="2026-02-27 10:47:15.535488444 +0000 UTC m=+1255.497854590" watchObservedRunningTime="2026-02-27 10:47:15.542876448 +0000 UTC m=+1255.505242564" Feb 27 10:47:16 crc kubenswrapper[4728]: I0227 10:47:16.342037 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 27 10:47:16 crc kubenswrapper[4728]: I0227 10:47:16.748364 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 27 10:47:16 crc kubenswrapper[4728]: I0227 10:47:16.860628 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 27 10:47:17 crc kubenswrapper[4728]: I0227 10:47:17.541111 4728 generic.go:334] "Generic (PLEG): container finished" podID="ec9639a6-9853-49cd-8215-0301af98d73b" containerID="c68cbc24be9bbc52102364afbc3388715622e2cc40cad1303b39b7df2ccb4365" exitCode=0 Feb 27 10:47:17 crc kubenswrapper[4728]: I0227 10:47:17.541262 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wnkfg" event={"ID":"ec9639a6-9853-49cd-8215-0301af98d73b","Type":"ContainerDied","Data":"c68cbc24be9bbc52102364afbc3388715622e2cc40cad1303b39b7df2ccb4365"} Feb 27 10:47:17 crc kubenswrapper[4728]: I0227 10:47:17.853692 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 27 10:47:17 crc 
kubenswrapper[4728]: I0227 10:47:17.992333 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.047463 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.143782 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ec9639a6-9853-49cd-8215-0301af98d73b-dispersionconf\") pod \"ec9639a6-9853-49cd-8215-0301af98d73b\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.143961 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ec9639a6-9853-49cd-8215-0301af98d73b-swiftconf\") pod \"ec9639a6-9853-49cd-8215-0301af98d73b\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.144015 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ec9639a6-9853-49cd-8215-0301af98d73b-ring-data-devices\") pod \"ec9639a6-9853-49cd-8215-0301af98d73b\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.144059 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9639a6-9853-49cd-8215-0301af98d73b-combined-ca-bundle\") pod \"ec9639a6-9853-49cd-8215-0301af98d73b\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.144108 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt5ds\" (UniqueName: 
\"kubernetes.io/projected/ec9639a6-9853-49cd-8215-0301af98d73b-kube-api-access-zt5ds\") pod \"ec9639a6-9853-49cd-8215-0301af98d73b\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.144145 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ec9639a6-9853-49cd-8215-0301af98d73b-etc-swift\") pod \"ec9639a6-9853-49cd-8215-0301af98d73b\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.144218 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec9639a6-9853-49cd-8215-0301af98d73b-scripts\") pod \"ec9639a6-9853-49cd-8215-0301af98d73b\" (UID: \"ec9639a6-9853-49cd-8215-0301af98d73b\") " Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.147139 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec9639a6-9853-49cd-8215-0301af98d73b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ec9639a6-9853-49cd-8215-0301af98d73b" (UID: "ec9639a6-9853-49cd-8215-0301af98d73b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.147482 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec9639a6-9853-49cd-8215-0301af98d73b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "ec9639a6-9853-49cd-8215-0301af98d73b" (UID: "ec9639a6-9853-49cd-8215-0301af98d73b"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.168760 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec9639a6-9853-49cd-8215-0301af98d73b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "ec9639a6-9853-49cd-8215-0301af98d73b" (UID: "ec9639a6-9853-49cd-8215-0301af98d73b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.176776 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec9639a6-9853-49cd-8215-0301af98d73b-kube-api-access-zt5ds" (OuterVolumeSpecName: "kube-api-access-zt5ds") pod "ec9639a6-9853-49cd-8215-0301af98d73b" (UID: "ec9639a6-9853-49cd-8215-0301af98d73b"). InnerVolumeSpecName "kube-api-access-zt5ds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.188539 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec9639a6-9853-49cd-8215-0301af98d73b-scripts" (OuterVolumeSpecName: "scripts") pod "ec9639a6-9853-49cd-8215-0301af98d73b" (UID: "ec9639a6-9853-49cd-8215-0301af98d73b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.190135 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec9639a6-9853-49cd-8215-0301af98d73b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec9639a6-9853-49cd-8215-0301af98d73b" (UID: "ec9639a6-9853-49cd-8215-0301af98d73b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.198435 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec9639a6-9853-49cd-8215-0301af98d73b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "ec9639a6-9853-49cd-8215-0301af98d73b" (UID: "ec9639a6-9853-49cd-8215-0301af98d73b"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.246601 4728 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ec9639a6-9853-49cd-8215-0301af98d73b-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.246721 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec9639a6-9853-49cd-8215-0301af98d73b-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.246797 4728 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ec9639a6-9853-49cd-8215-0301af98d73b-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.246876 4728 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ec9639a6-9853-49cd-8215-0301af98d73b-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.246943 4728 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ec9639a6-9853-49cd-8215-0301af98d73b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.247002 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ec9639a6-9853-49cd-8215-0301af98d73b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.247052 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt5ds\" (UniqueName: \"kubernetes.io/projected/ec9639a6-9853-49cd-8215-0301af98d73b-kube-api-access-zt5ds\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.564380 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wnkfg" event={"ID":"ec9639a6-9853-49cd-8215-0301af98d73b","Type":"ContainerDied","Data":"b03c3073e1f6600afacebbceb93f931e0ced2a013caf7bec019d40a36cf5638b"} Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.564443 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b03c3073e1f6600afacebbceb93f931e0ced2a013caf7bec019d40a36cf5638b" Feb 27 10:47:19 crc kubenswrapper[4728]: I0227 10:47:19.564444 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wnkfg" Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.227648 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-r5wj6"] Feb 27 10:47:21 crc kubenswrapper[4728]: E0227 10:47:21.228662 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec9639a6-9853-49cd-8215-0301af98d73b" containerName="swift-ring-rebalance" Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.228687 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec9639a6-9853-49cd-8215-0301af98d73b" containerName="swift-ring-rebalance" Feb 27 10:47:21 crc kubenswrapper[4728]: E0227 10:47:21.228721 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962b4d7e-3021-44a1-9374-0cbf20fbffa1" containerName="init" Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.228735 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="962b4d7e-3021-44a1-9374-0cbf20fbffa1" containerName="init" Feb 27 10:47:21 crc kubenswrapper[4728]: E0227 10:47:21.228776 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962b4d7e-3021-44a1-9374-0cbf20fbffa1" containerName="dnsmasq-dns" Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.228790 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="962b4d7e-3021-44a1-9374-0cbf20fbffa1" containerName="dnsmasq-dns" Feb 27 10:47:21 crc kubenswrapper[4728]: E0227 10:47:21.228822 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54ecd495-526a-426c-8a53-911882e04cd8" containerName="init" Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.228835 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="54ecd495-526a-426c-8a53-911882e04cd8" containerName="init" Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.229185 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec9639a6-9853-49cd-8215-0301af98d73b" containerName="swift-ring-rebalance" Feb 27 
10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.229221 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="962b4d7e-3021-44a1-9374-0cbf20fbffa1" containerName="dnsmasq-dns" Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.229279 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="54ecd495-526a-426c-8a53-911882e04cd8" containerName="init" Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.230480 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r5wj6" Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.232978 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.259046 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-r5wj6"] Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.403188 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e45b815-1e80-40bf-a006-f7e62a4c0e64-operator-scripts\") pod \"root-account-create-update-r5wj6\" (UID: \"1e45b815-1e80-40bf-a006-f7e62a4c0e64\") " pod="openstack/root-account-create-update-r5wj6" Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.403312 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5t29\" (UniqueName: \"kubernetes.io/projected/1e45b815-1e80-40bf-a006-f7e62a4c0e64-kube-api-access-f5t29\") pod \"root-account-create-update-r5wj6\" (UID: \"1e45b815-1e80-40bf-a006-f7e62a4c0e64\") " pod="openstack/root-account-create-update-r5wj6" Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.506217 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1e45b815-1e80-40bf-a006-f7e62a4c0e64-operator-scripts\") pod \"root-account-create-update-r5wj6\" (UID: \"1e45b815-1e80-40bf-a006-f7e62a4c0e64\") " pod="openstack/root-account-create-update-r5wj6" Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.506360 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5t29\" (UniqueName: \"kubernetes.io/projected/1e45b815-1e80-40bf-a006-f7e62a4c0e64-kube-api-access-f5t29\") pod \"root-account-create-update-r5wj6\" (UID: \"1e45b815-1e80-40bf-a006-f7e62a4c0e64\") " pod="openstack/root-account-create-update-r5wj6" Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.507546 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e45b815-1e80-40bf-a006-f7e62a4c0e64-operator-scripts\") pod \"root-account-create-update-r5wj6\" (UID: \"1e45b815-1e80-40bf-a006-f7e62a4c0e64\") " pod="openstack/root-account-create-update-r5wj6" Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.545570 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5t29\" (UniqueName: \"kubernetes.io/projected/1e45b815-1e80-40bf-a006-f7e62a4c0e64-kube-api-access-f5t29\") pod \"root-account-create-update-r5wj6\" (UID: \"1e45b815-1e80-40bf-a006-f7e62a4c0e64\") " pod="openstack/root-account-create-update-r5wj6" Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.549005 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-r5wj6" Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.603039 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.725156 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jvkhg"] Feb 27 10:47:21 crc kubenswrapper[4728]: I0227 10:47:21.726612 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" podUID="4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0" containerName="dnsmasq-dns" containerID="cri-o://ca43cefce3d00218282806f2a7b3925d5de51832aeffe6151436d3058c14586b" gracePeriod=10 Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.162872 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-r5wj6"] Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.304638 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.432242 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0-dns-svc\") pod \"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0\" (UID: \"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0\") " Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.432361 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0-config\") pod \"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0\" (UID: \"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0\") " Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.432552 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd76g\" (UniqueName: \"kubernetes.io/projected/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0-kube-api-access-fd76g\") pod \"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0\" (UID: \"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0\") " Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.441173 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0-kube-api-access-fd76g" (OuterVolumeSpecName: "kube-api-access-fd76g") pod "4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0" (UID: "4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0"). InnerVolumeSpecName "kube-api-access-fd76g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.477258 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0" (UID: "4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.479333 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0-config" (OuterVolumeSpecName: "config") pod "4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0" (UID: "4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.534810 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.534839 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.534848 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd76g\" (UniqueName: \"kubernetes.io/projected/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0-kube-api-access-fd76g\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.598683 4728 generic.go:334] "Generic (PLEG): container finished" podID="4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0" containerID="ca43cefce3d00218282806f2a7b3925d5de51832aeffe6151436d3058c14586b" exitCode=0 Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.598740 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" event={"ID":"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0","Type":"ContainerDied","Data":"ca43cefce3d00218282806f2a7b3925d5de51832aeffe6151436d3058c14586b"} Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.598767 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" 
event={"ID":"4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0","Type":"ContainerDied","Data":"cfe4282ce681b6289fa1a5afcc6902675e4506ce82742702572bcecc2dfccf17"} Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.598785 4728 scope.go:117] "RemoveContainer" containerID="ca43cefce3d00218282806f2a7b3925d5de51832aeffe6151436d3058c14586b" Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.598893 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jvkhg" Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.604188 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r5wj6" event={"ID":"1e45b815-1e80-40bf-a006-f7e62a4c0e64","Type":"ContainerStarted","Data":"11a744c851f102a1c67cb296ad43cb73ec26b28fbfbbdb0cb2daffeef02f844a"} Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.604224 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r5wj6" event={"ID":"1e45b815-1e80-40bf-a006-f7e62a4c0e64","Type":"ContainerStarted","Data":"7c6bc2910dcaf27fe4e439395e8f94a6281b6cf311e5b5f3dc9b76f39f1e6d3c"} Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.626704 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-r5wj6" podStartSLOduration=1.626686531 podStartE2EDuration="1.626686531s" podCreationTimestamp="2026-02-27 10:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:47:22.620618475 +0000 UTC m=+1262.582984581" watchObservedRunningTime="2026-02-27 10:47:22.626686531 +0000 UTC m=+1262.589052637" Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.631162 4728 scope.go:117] "RemoveContainer" containerID="75eed1813ea14af39108ac71772ba33817f909960c4565e843f9e5fbe21d7c2d" Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.644208 4728 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jvkhg"] Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.650390 4728 scope.go:117] "RemoveContainer" containerID="ca43cefce3d00218282806f2a7b3925d5de51832aeffe6151436d3058c14586b" Feb 27 10:47:22 crc kubenswrapper[4728]: E0227 10:47:22.652059 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca43cefce3d00218282806f2a7b3925d5de51832aeffe6151436d3058c14586b\": container with ID starting with ca43cefce3d00218282806f2a7b3925d5de51832aeffe6151436d3058c14586b not found: ID does not exist" containerID="ca43cefce3d00218282806f2a7b3925d5de51832aeffe6151436d3058c14586b" Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.652099 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca43cefce3d00218282806f2a7b3925d5de51832aeffe6151436d3058c14586b"} err="failed to get container status \"ca43cefce3d00218282806f2a7b3925d5de51832aeffe6151436d3058c14586b\": rpc error: code = NotFound desc = could not find container \"ca43cefce3d00218282806f2a7b3925d5de51832aeffe6151436d3058c14586b\": container with ID starting with ca43cefce3d00218282806f2a7b3925d5de51832aeffe6151436d3058c14586b not found: ID does not exist" Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.652127 4728 scope.go:117] "RemoveContainer" containerID="75eed1813ea14af39108ac71772ba33817f909960c4565e843f9e5fbe21d7c2d" Feb 27 10:47:22 crc kubenswrapper[4728]: E0227 10:47:22.652400 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75eed1813ea14af39108ac71772ba33817f909960c4565e843f9e5fbe21d7c2d\": container with ID starting with 75eed1813ea14af39108ac71772ba33817f909960c4565e843f9e5fbe21d7c2d not found: ID does not exist" containerID="75eed1813ea14af39108ac71772ba33817f909960c4565e843f9e5fbe21d7c2d" Feb 27 10:47:22 crc 
kubenswrapper[4728]: I0227 10:47:22.652430 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75eed1813ea14af39108ac71772ba33817f909960c4565e843f9e5fbe21d7c2d"} err="failed to get container status \"75eed1813ea14af39108ac71772ba33817f909960c4565e843f9e5fbe21d7c2d\": rpc error: code = NotFound desc = could not find container \"75eed1813ea14af39108ac71772ba33817f909960c4565e843f9e5fbe21d7c2d\": container with ID starting with 75eed1813ea14af39108ac71772ba33817f909960c4565e843f9e5fbe21d7c2d not found: ID does not exist" Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.652521 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jvkhg"] Feb 27 10:47:22 crc kubenswrapper[4728]: I0227 10:47:22.736429 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0" path="/var/lib/kubelet/pods/4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0/volumes" Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.266414 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7b97b57845-jhmxj" podUID="f82e439f-89e2-4143-b9c6-1935c3154d0c" containerName="console" containerID="cri-o://7940b5dc9d3f0c57017adf88f3e553f526d8fb2e1a7753dfe7f8ec881961af03" gracePeriod=15 Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.625223 4728 generic.go:334] "Generic (PLEG): container finished" podID="1e45b815-1e80-40bf-a006-f7e62a4c0e64" containerID="11a744c851f102a1c67cb296ad43cb73ec26b28fbfbbdb0cb2daffeef02f844a" exitCode=0 Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.625691 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r5wj6" event={"ID":"1e45b815-1e80-40bf-a006-f7e62a4c0e64","Type":"ContainerDied","Data":"11a744c851f102a1c67cb296ad43cb73ec26b28fbfbbdb0cb2daffeef02f844a"} Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.631197 4728 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-console_console-7b97b57845-jhmxj_f82e439f-89e2-4143-b9c6-1935c3154d0c/console/0.log" Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.631235 4728 generic.go:334] "Generic (PLEG): container finished" podID="f82e439f-89e2-4143-b9c6-1935c3154d0c" containerID="7940b5dc9d3f0c57017adf88f3e553f526d8fb2e1a7753dfe7f8ec881961af03" exitCode=2 Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.631277 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b97b57845-jhmxj" event={"ID":"f82e439f-89e2-4143-b9c6-1935c3154d0c","Type":"ContainerDied","Data":"7940b5dc9d3f0c57017adf88f3e553f526d8fb2e1a7753dfe7f8ec881961af03"} Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.827851 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7b97b57845-jhmxj_f82e439f-89e2-4143-b9c6-1935c3154d0c/console/0.log" Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.827934 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.964006 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f82e439f-89e2-4143-b9c6-1935c3154d0c-console-serving-cert\") pod \"f82e439f-89e2-4143-b9c6-1935c3154d0c\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.964329 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-service-ca\") pod \"f82e439f-89e2-4143-b9c6-1935c3154d0c\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.964396 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-oauth-serving-cert\") pod \"f82e439f-89e2-4143-b9c6-1935c3154d0c\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.964473 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-console-config\") pod \"f82e439f-89e2-4143-b9c6-1935c3154d0c\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.964678 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lwfj\" (UniqueName: \"kubernetes.io/projected/f82e439f-89e2-4143-b9c6-1935c3154d0c-kube-api-access-6lwfj\") pod \"f82e439f-89e2-4143-b9c6-1935c3154d0c\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.964839 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-trusted-ca-bundle\") pod \"f82e439f-89e2-4143-b9c6-1935c3154d0c\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.964954 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f82e439f-89e2-4143-b9c6-1935c3154d0c-console-oauth-config\") pod \"f82e439f-89e2-4143-b9c6-1935c3154d0c\" (UID: \"f82e439f-89e2-4143-b9c6-1935c3154d0c\") " Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.965065 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-service-ca" (OuterVolumeSpecName: "service-ca") pod "f82e439f-89e2-4143-b9c6-1935c3154d0c" (UID: "f82e439f-89e2-4143-b9c6-1935c3154d0c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.965104 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-console-config" (OuterVolumeSpecName: "console-config") pod "f82e439f-89e2-4143-b9c6-1935c3154d0c" (UID: "f82e439f-89e2-4143-b9c6-1935c3154d0c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.965117 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f82e439f-89e2-4143-b9c6-1935c3154d0c" (UID: "f82e439f-89e2-4143-b9c6-1935c3154d0c"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.965745 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f82e439f-89e2-4143-b9c6-1935c3154d0c" (UID: "f82e439f-89e2-4143-b9c6-1935c3154d0c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.966621 4728 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.966648 4728 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.966658 4728 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.966668 4728 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f82e439f-89e2-4143-b9c6-1935c3154d0c-console-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.970350 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f82e439f-89e2-4143-b9c6-1935c3154d0c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f82e439f-89e2-4143-b9c6-1935c3154d0c" (UID: "f82e439f-89e2-4143-b9c6-1935c3154d0c"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.985319 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f82e439f-89e2-4143-b9c6-1935c3154d0c-kube-api-access-6lwfj" (OuterVolumeSpecName: "kube-api-access-6lwfj") pod "f82e439f-89e2-4143-b9c6-1935c3154d0c" (UID: "f82e439f-89e2-4143-b9c6-1935c3154d0c"). InnerVolumeSpecName "kube-api-access-6lwfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:23 crc kubenswrapper[4728]: I0227 10:47:23.993695 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f82e439f-89e2-4143-b9c6-1935c3154d0c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f82e439f-89e2-4143-b9c6-1935c3154d0c" (UID: "f82e439f-89e2-4143-b9c6-1935c3154d0c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.069281 4728 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f82e439f-89e2-4143-b9c6-1935c3154d0c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.069675 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lwfj\" (UniqueName: \"kubernetes.io/projected/f82e439f-89e2-4143-b9c6-1935c3154d0c-kube-api-access-6lwfj\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.069689 4728 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f82e439f-89e2-4143-b9c6-1935c3154d0c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.307166 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-8g6gt"] Feb 27 10:47:24 crc 
kubenswrapper[4728]: E0227 10:47:24.307672 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0" containerName="init" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.307696 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0" containerName="init" Feb 27 10:47:24 crc kubenswrapper[4728]: E0227 10:47:24.307725 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0" containerName="dnsmasq-dns" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.307733 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0" containerName="dnsmasq-dns" Feb 27 10:47:24 crc kubenswrapper[4728]: E0227 10:47:24.307754 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f82e439f-89e2-4143-b9c6-1935c3154d0c" containerName="console" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.307762 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f82e439f-89e2-4143-b9c6-1935c3154d0c" containerName="console" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.308033 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f82e439f-89e2-4143-b9c6-1935c3154d0c" containerName="console" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.308059 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eee28e7-e3c4-46a4-a0ee-88fdc8cb10e0" containerName="dnsmasq-dns" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.308961 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-8g6gt" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.314803 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8g6gt"] Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.390807 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e32a-account-create-update-kxsr5"] Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.392900 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e32a-account-create-update-kxsr5" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.395225 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.402679 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e32a-account-create-update-kxsr5"] Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.490467 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxpbl\" (UniqueName: \"kubernetes.io/projected/e17ee83c-ab26-4c75-b2d6-e9278ddfcc06-kube-api-access-kxpbl\") pod \"glance-db-create-8g6gt\" (UID: \"e17ee83c-ab26-4c75-b2d6-e9278ddfcc06\") " pod="openstack/glance-db-create-8g6gt" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.490582 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e17ee83c-ab26-4c75-b2d6-e9278ddfcc06-operator-scripts\") pod \"glance-db-create-8g6gt\" (UID: \"e17ee83c-ab26-4c75-b2d6-e9278ddfcc06\") " pod="openstack/glance-db-create-8g6gt" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.490633 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/039bf17c-5c3a-4393-97e1-d8d78fb20fd8-operator-scripts\") pod \"glance-e32a-account-create-update-kxsr5\" (UID: \"039bf17c-5c3a-4393-97e1-d8d78fb20fd8\") " pod="openstack/glance-e32a-account-create-update-kxsr5" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.490659 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjnjs\" (UniqueName: \"kubernetes.io/projected/039bf17c-5c3a-4393-97e1-d8d78fb20fd8-kube-api-access-fjnjs\") pod \"glance-e32a-account-create-update-kxsr5\" (UID: \"039bf17c-5c3a-4393-97e1-d8d78fb20fd8\") " pod="openstack/glance-e32a-account-create-update-kxsr5" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.594082 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxpbl\" (UniqueName: \"kubernetes.io/projected/e17ee83c-ab26-4c75-b2d6-e9278ddfcc06-kube-api-access-kxpbl\") pod \"glance-db-create-8g6gt\" (UID: \"e17ee83c-ab26-4c75-b2d6-e9278ddfcc06\") " pod="openstack/glance-db-create-8g6gt" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.594287 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e17ee83c-ab26-4c75-b2d6-e9278ddfcc06-operator-scripts\") pod \"glance-db-create-8g6gt\" (UID: \"e17ee83c-ab26-4c75-b2d6-e9278ddfcc06\") " pod="openstack/glance-db-create-8g6gt" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.594385 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/039bf17c-5c3a-4393-97e1-d8d78fb20fd8-operator-scripts\") pod \"glance-e32a-account-create-update-kxsr5\" (UID: \"039bf17c-5c3a-4393-97e1-d8d78fb20fd8\") " pod="openstack/glance-e32a-account-create-update-kxsr5" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.594420 4728 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-fjnjs\" (UniqueName: \"kubernetes.io/projected/039bf17c-5c3a-4393-97e1-d8d78fb20fd8-kube-api-access-fjnjs\") pod \"glance-e32a-account-create-update-kxsr5\" (UID: \"039bf17c-5c3a-4393-97e1-d8d78fb20fd8\") " pod="openstack/glance-e32a-account-create-update-kxsr5" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.595482 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e17ee83c-ab26-4c75-b2d6-e9278ddfcc06-operator-scripts\") pod \"glance-db-create-8g6gt\" (UID: \"e17ee83c-ab26-4c75-b2d6-e9278ddfcc06\") " pod="openstack/glance-db-create-8g6gt" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.600059 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/039bf17c-5c3a-4393-97e1-d8d78fb20fd8-operator-scripts\") pod \"glance-e32a-account-create-update-kxsr5\" (UID: \"039bf17c-5c3a-4393-97e1-d8d78fb20fd8\") " pod="openstack/glance-e32a-account-create-update-kxsr5" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.613553 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjnjs\" (UniqueName: \"kubernetes.io/projected/039bf17c-5c3a-4393-97e1-d8d78fb20fd8-kube-api-access-fjnjs\") pod \"glance-e32a-account-create-update-kxsr5\" (UID: \"039bf17c-5c3a-4393-97e1-d8d78fb20fd8\") " pod="openstack/glance-e32a-account-create-update-kxsr5" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.618069 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxpbl\" (UniqueName: \"kubernetes.io/projected/e17ee83c-ab26-4c75-b2d6-e9278ddfcc06-kube-api-access-kxpbl\") pod \"glance-db-create-8g6gt\" (UID: \"e17ee83c-ab26-4c75-b2d6-e9278ddfcc06\") " pod="openstack/glance-db-create-8g6gt" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.635317 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-8g6gt" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.651826 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7b97b57845-jhmxj_f82e439f-89e2-4143-b9c6-1935c3154d0c/console/0.log" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.652001 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b97b57845-jhmxj" event={"ID":"f82e439f-89e2-4143-b9c6-1935c3154d0c","Type":"ContainerDied","Data":"8256301e31844a2e02d2e5f2b1a3306096242ecd13811ee048cc9d88b1bae632"} Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.652047 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b97b57845-jhmxj" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.652069 4728 scope.go:117] "RemoveContainer" containerID="7940b5dc9d3f0c57017adf88f3e553f526d8fb2e1a7753dfe7f8ec881961af03" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.714693 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e32a-account-create-update-kxsr5" Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.715695 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7b97b57845-jhmxj"] Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.726642 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7b97b57845-jhmxj"] Feb 27 10:47:24 crc kubenswrapper[4728]: I0227 10:47:24.743763 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f82e439f-89e2-4143-b9c6-1935c3154d0c" path="/var/lib/kubelet/pods/f82e439f-89e2-4143-b9c6-1935c3154d0c/volumes" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.052634 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-jvhk9"] Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.054109 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jvhk9" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.063392 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jvhk9"] Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.152447 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-851e-account-create-update-8stp4"] Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.153978 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-851e-account-create-update-8stp4" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.156252 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.169368 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-851e-account-create-update-8stp4"] Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.206517 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13a5f0a3-e12d-4f71-8551-63239926851b-operator-scripts\") pod \"keystone-db-create-jvhk9\" (UID: \"13a5f0a3-e12d-4f71-8551-63239926851b\") " pod="openstack/keystone-db-create-jvhk9" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.206624 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9hct\" (UniqueName: \"kubernetes.io/projected/13a5f0a3-e12d-4f71-8551-63239926851b-kube-api-access-z9hct\") pod \"keystone-db-create-jvhk9\" (UID: \"13a5f0a3-e12d-4f71-8551-63239926851b\") " pod="openstack/keystone-db-create-jvhk9" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.277740 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-n9grl"] Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.279112 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-n9grl" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.294626 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-n9grl"] Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.308862 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9hct\" (UniqueName: \"kubernetes.io/projected/13a5f0a3-e12d-4f71-8551-63239926851b-kube-api-access-z9hct\") pod \"keystone-db-create-jvhk9\" (UID: \"13a5f0a3-e12d-4f71-8551-63239926851b\") " pod="openstack/keystone-db-create-jvhk9" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.308916 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b2435b6-3193-40c1-85b8-e177682f9d3a-operator-scripts\") pod \"keystone-851e-account-create-update-8stp4\" (UID: \"1b2435b6-3193-40c1-85b8-e177682f9d3a\") " pod="openstack/keystone-851e-account-create-update-8stp4" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.308981 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dftr\" (UniqueName: \"kubernetes.io/projected/1b2435b6-3193-40c1-85b8-e177682f9d3a-kube-api-access-7dftr\") pod \"keystone-851e-account-create-update-8stp4\" (UID: \"1b2435b6-3193-40c1-85b8-e177682f9d3a\") " pod="openstack/keystone-851e-account-create-update-8stp4" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.309106 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13a5f0a3-e12d-4f71-8551-63239926851b-operator-scripts\") pod \"keystone-db-create-jvhk9\" (UID: \"13a5f0a3-e12d-4f71-8551-63239926851b\") " pod="openstack/keystone-db-create-jvhk9" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.309855 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13a5f0a3-e12d-4f71-8551-63239926851b-operator-scripts\") pod \"keystone-db-create-jvhk9\" (UID: \"13a5f0a3-e12d-4f71-8551-63239926851b\") " pod="openstack/keystone-db-create-jvhk9" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.334403 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e32a-account-create-update-kxsr5"] Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.338493 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9hct\" (UniqueName: \"kubernetes.io/projected/13a5f0a3-e12d-4f71-8551-63239926851b-kube-api-access-z9hct\") pod \"keystone-db-create-jvhk9\" (UID: \"13a5f0a3-e12d-4f71-8551-63239926851b\") " pod="openstack/keystone-db-create-jvhk9" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.343598 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8g6gt"] Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.369205 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-69b9-account-create-update-4p7vx"] Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.370625 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-69b9-account-create-update-4p7vx" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.375030 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.376953 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-69b9-account-create-update-4p7vx"] Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.378944 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jvhk9" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.411400 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b2435b6-3193-40c1-85b8-e177682f9d3a-operator-scripts\") pod \"keystone-851e-account-create-update-8stp4\" (UID: \"1b2435b6-3193-40c1-85b8-e177682f9d3a\") " pod="openstack/keystone-851e-account-create-update-8stp4" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.411629 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dftr\" (UniqueName: \"kubernetes.io/projected/1b2435b6-3193-40c1-85b8-e177682f9d3a-kube-api-access-7dftr\") pod \"keystone-851e-account-create-update-8stp4\" (UID: \"1b2435b6-3193-40c1-85b8-e177682f9d3a\") " pod="openstack/keystone-851e-account-create-update-8stp4" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.411669 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgf5x\" (UniqueName: \"kubernetes.io/projected/0f3f20b1-2c57-4b56-9b40-47dae5701446-kube-api-access-mgf5x\") pod \"placement-db-create-n9grl\" (UID: \"0f3f20b1-2c57-4b56-9b40-47dae5701446\") " pod="openstack/placement-db-create-n9grl" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.411705 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f3f20b1-2c57-4b56-9b40-47dae5701446-operator-scripts\") pod \"placement-db-create-n9grl\" (UID: \"0f3f20b1-2c57-4b56-9b40-47dae5701446\") " pod="openstack/placement-db-create-n9grl" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.412558 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b2435b6-3193-40c1-85b8-e177682f9d3a-operator-scripts\") pod 
\"keystone-851e-account-create-update-8stp4\" (UID: \"1b2435b6-3193-40c1-85b8-e177682f9d3a\") " pod="openstack/keystone-851e-account-create-update-8stp4" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.437298 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dftr\" (UniqueName: \"kubernetes.io/projected/1b2435b6-3193-40c1-85b8-e177682f9d3a-kube-api-access-7dftr\") pod \"keystone-851e-account-create-update-8stp4\" (UID: \"1b2435b6-3193-40c1-85b8-e177682f9d3a\") " pod="openstack/keystone-851e-account-create-update-8stp4" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.470763 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-851e-account-create-update-8stp4" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.515723 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/439b89bb-a1bf-41c4-bb4f-612eaaeecb2f-operator-scripts\") pod \"placement-69b9-account-create-update-4p7vx\" (UID: \"439b89bb-a1bf-41c4-bb4f-612eaaeecb2f\") " pod="openstack/placement-69b9-account-create-update-4p7vx" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.515966 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnf2k\" (UniqueName: \"kubernetes.io/projected/439b89bb-a1bf-41c4-bb4f-612eaaeecb2f-kube-api-access-rnf2k\") pod \"placement-69b9-account-create-update-4p7vx\" (UID: \"439b89bb-a1bf-41c4-bb4f-612eaaeecb2f\") " pod="openstack/placement-69b9-account-create-update-4p7vx" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.516096 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgf5x\" (UniqueName: \"kubernetes.io/projected/0f3f20b1-2c57-4b56-9b40-47dae5701446-kube-api-access-mgf5x\") pod \"placement-db-create-n9grl\" (UID: 
\"0f3f20b1-2c57-4b56-9b40-47dae5701446\") " pod="openstack/placement-db-create-n9grl" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.516146 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f3f20b1-2c57-4b56-9b40-47dae5701446-operator-scripts\") pod \"placement-db-create-n9grl\" (UID: \"0f3f20b1-2c57-4b56-9b40-47dae5701446\") " pod="openstack/placement-db-create-n9grl" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.518363 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f3f20b1-2c57-4b56-9b40-47dae5701446-operator-scripts\") pod \"placement-db-create-n9grl\" (UID: \"0f3f20b1-2c57-4b56-9b40-47dae5701446\") " pod="openstack/placement-db-create-n9grl" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.539382 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgf5x\" (UniqueName: \"kubernetes.io/projected/0f3f20b1-2c57-4b56-9b40-47dae5701446-kube-api-access-mgf5x\") pod \"placement-db-create-n9grl\" (UID: \"0f3f20b1-2c57-4b56-9b40-47dae5701446\") " pod="openstack/placement-db-create-n9grl" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.610075 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-n9grl" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.619936 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/439b89bb-a1bf-41c4-bb4f-612eaaeecb2f-operator-scripts\") pod \"placement-69b9-account-create-update-4p7vx\" (UID: \"439b89bb-a1bf-41c4-bb4f-612eaaeecb2f\") " pod="openstack/placement-69b9-account-create-update-4p7vx" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.620050 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnf2k\" (UniqueName: \"kubernetes.io/projected/439b89bb-a1bf-41c4-bb4f-612eaaeecb2f-kube-api-access-rnf2k\") pod \"placement-69b9-account-create-update-4p7vx\" (UID: \"439b89bb-a1bf-41c4-bb4f-612eaaeecb2f\") " pod="openstack/placement-69b9-account-create-update-4p7vx" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.621650 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/439b89bb-a1bf-41c4-bb4f-612eaaeecb2f-operator-scripts\") pod \"placement-69b9-account-create-update-4p7vx\" (UID: \"439b89bb-a1bf-41c4-bb4f-612eaaeecb2f\") " pod="openstack/placement-69b9-account-create-update-4p7vx" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.639630 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnf2k\" (UniqueName: \"kubernetes.io/projected/439b89bb-a1bf-41c4-bb4f-612eaaeecb2f-kube-api-access-rnf2k\") pod \"placement-69b9-account-create-update-4p7vx\" (UID: \"439b89bb-a1bf-41c4-bb4f-612eaaeecb2f\") " pod="openstack/placement-69b9-account-create-update-4p7vx" Feb 27 10:47:25 crc kubenswrapper[4728]: I0227 10:47:25.695432 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-69b9-account-create-update-4p7vx" Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.187570 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-h9f2n"] Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.188926 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-h9f2n" Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.202053 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-h9f2n"] Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.235521 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6656p\" (UniqueName: \"kubernetes.io/projected/2f3369f2-cce2-4912-a3ff-59677709bda2-kube-api-access-6656p\") pod \"mysqld-exporter-openstack-db-create-h9f2n\" (UID: \"2f3369f2-cce2-4912-a3ff-59677709bda2\") " pod="openstack/mysqld-exporter-openstack-db-create-h9f2n" Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.235878 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f3369f2-cce2-4912-a3ff-59677709bda2-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-h9f2n\" (UID: \"2f3369f2-cce2-4912-a3ff-59677709bda2\") " pod="openstack/mysqld-exporter-openstack-db-create-h9f2n" Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.273570 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-63d8-account-create-update-ntp7w"] Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.275050 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-63d8-account-create-update-ntp7w" Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.276404 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.287047 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-63d8-account-create-update-ntp7w"] Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.340164 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp5zx\" (UniqueName: \"kubernetes.io/projected/415d3211-9668-4f77-8ea5-6e60c3685c14-kube-api-access-jp5zx\") pod \"mysqld-exporter-63d8-account-create-update-ntp7w\" (UID: \"415d3211-9668-4f77-8ea5-6e60c3685c14\") " pod="openstack/mysqld-exporter-63d8-account-create-update-ntp7w" Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.340231 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/415d3211-9668-4f77-8ea5-6e60c3685c14-operator-scripts\") pod \"mysqld-exporter-63d8-account-create-update-ntp7w\" (UID: \"415d3211-9668-4f77-8ea5-6e60c3685c14\") " pod="openstack/mysqld-exporter-63d8-account-create-update-ntp7w" Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.340297 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6656p\" (UniqueName: \"kubernetes.io/projected/2f3369f2-cce2-4912-a3ff-59677709bda2-kube-api-access-6656p\") pod \"mysqld-exporter-openstack-db-create-h9f2n\" (UID: \"2f3369f2-cce2-4912-a3ff-59677709bda2\") " pod="openstack/mysqld-exporter-openstack-db-create-h9f2n" Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.340350 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2f3369f2-cce2-4912-a3ff-59677709bda2-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-h9f2n\" (UID: \"2f3369f2-cce2-4912-a3ff-59677709bda2\") " pod="openstack/mysqld-exporter-openstack-db-create-h9f2n" Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.340993 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f3369f2-cce2-4912-a3ff-59677709bda2-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-h9f2n\" (UID: \"2f3369f2-cce2-4912-a3ff-59677709bda2\") " pod="openstack/mysqld-exporter-openstack-db-create-h9f2n" Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.356126 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6656p\" (UniqueName: \"kubernetes.io/projected/2f3369f2-cce2-4912-a3ff-59677709bda2-kube-api-access-6656p\") pod \"mysqld-exporter-openstack-db-create-h9f2n\" (UID: \"2f3369f2-cce2-4912-a3ff-59677709bda2\") " pod="openstack/mysqld-exporter-openstack-db-create-h9f2n" Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.442808 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp5zx\" (UniqueName: \"kubernetes.io/projected/415d3211-9668-4f77-8ea5-6e60c3685c14-kube-api-access-jp5zx\") pod \"mysqld-exporter-63d8-account-create-update-ntp7w\" (UID: \"415d3211-9668-4f77-8ea5-6e60c3685c14\") " pod="openstack/mysqld-exporter-63d8-account-create-update-ntp7w" Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.443108 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/415d3211-9668-4f77-8ea5-6e60c3685c14-operator-scripts\") pod \"mysqld-exporter-63d8-account-create-update-ntp7w\" (UID: \"415d3211-9668-4f77-8ea5-6e60c3685c14\") " pod="openstack/mysqld-exporter-63d8-account-create-update-ntp7w" Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 
10:47:26.444833 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/415d3211-9668-4f77-8ea5-6e60c3685c14-operator-scripts\") pod \"mysqld-exporter-63d8-account-create-update-ntp7w\" (UID: \"415d3211-9668-4f77-8ea5-6e60c3685c14\") " pod="openstack/mysqld-exporter-63d8-account-create-update-ntp7w" Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.458373 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp5zx\" (UniqueName: \"kubernetes.io/projected/415d3211-9668-4f77-8ea5-6e60c3685c14-kube-api-access-jp5zx\") pod \"mysqld-exporter-63d8-account-create-update-ntp7w\" (UID: \"415d3211-9668-4f77-8ea5-6e60c3685c14\") " pod="openstack/mysqld-exporter-63d8-account-create-update-ntp7w" Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.509949 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-h9f2n" Feb 27 10:47:26 crc kubenswrapper[4728]: I0227 10:47:26.631690 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-63d8-account-create-update-ntp7w" Feb 27 10:47:28 crc kubenswrapper[4728]: W0227 10:47:28.489016 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode17ee83c_ab26_4c75_b2d6_e9278ddfcc06.slice/crio-c2861b8a87aa381222d050c07fcfc51b84c4f2548113ed5934221d7274209bdd WatchSource:0}: Error finding container c2861b8a87aa381222d050c07fcfc51b84c4f2548113ed5934221d7274209bdd: Status 404 returned error can't find the container with id c2861b8a87aa381222d050c07fcfc51b84c4f2548113ed5934221d7274209bdd Feb 27 10:47:28 crc kubenswrapper[4728]: W0227 10:47:28.495013 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod039bf17c_5c3a_4393_97e1_d8d78fb20fd8.slice/crio-4aaa4e657baf4382eff9fe72fb319d15937b4a23b98ff2e117d3eb119be8d874 WatchSource:0}: Error finding container 4aaa4e657baf4382eff9fe72fb319d15937b4a23b98ff2e117d3eb119be8d874: Status 404 returned error can't find the container with id 4aaa4e657baf4382eff9fe72fb319d15937b4a23b98ff2e117d3eb119be8d874 Feb 27 10:47:28 crc kubenswrapper[4728]: I0227 10:47:28.614240 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-r5wj6" Feb 27 10:47:28 crc kubenswrapper[4728]: I0227 10:47:28.704042 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8g6gt" event={"ID":"e17ee83c-ab26-4c75-b2d6-e9278ddfcc06","Type":"ContainerStarted","Data":"c2861b8a87aa381222d050c07fcfc51b84c4f2548113ed5934221d7274209bdd"} Feb 27 10:47:28 crc kubenswrapper[4728]: I0227 10:47:28.705702 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r5wj6" event={"ID":"1e45b815-1e80-40bf-a006-f7e62a4c0e64","Type":"ContainerDied","Data":"7c6bc2910dcaf27fe4e439395e8f94a6281b6cf311e5b5f3dc9b76f39f1e6d3c"} Feb 27 10:47:28 crc kubenswrapper[4728]: I0227 10:47:28.705731 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c6bc2910dcaf27fe4e439395e8f94a6281b6cf311e5b5f3dc9b76f39f1e6d3c" Feb 27 10:47:28 crc kubenswrapper[4728]: I0227 10:47:28.705758 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-r5wj6" Feb 27 10:47:28 crc kubenswrapper[4728]: I0227 10:47:28.706804 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e32a-account-create-update-kxsr5" event={"ID":"039bf17c-5c3a-4393-97e1-d8d78fb20fd8","Type":"ContainerStarted","Data":"4aaa4e657baf4382eff9fe72fb319d15937b4a23b98ff2e117d3eb119be8d874"} Feb 27 10:47:28 crc kubenswrapper[4728]: I0227 10:47:28.803227 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e45b815-1e80-40bf-a006-f7e62a4c0e64-operator-scripts\") pod \"1e45b815-1e80-40bf-a006-f7e62a4c0e64\" (UID: \"1e45b815-1e80-40bf-a006-f7e62a4c0e64\") " Feb 27 10:47:28 crc kubenswrapper[4728]: I0227 10:47:28.803543 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5t29\" (UniqueName: \"kubernetes.io/projected/1e45b815-1e80-40bf-a006-f7e62a4c0e64-kube-api-access-f5t29\") pod \"1e45b815-1e80-40bf-a006-f7e62a4c0e64\" (UID: \"1e45b815-1e80-40bf-a006-f7e62a4c0e64\") " Feb 27 10:47:28 crc kubenswrapper[4728]: I0227 10:47:28.805056 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e45b815-1e80-40bf-a006-f7e62a4c0e64-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1e45b815-1e80-40bf-a006-f7e62a4c0e64" (UID: "1e45b815-1e80-40bf-a006-f7e62a4c0e64"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:28 crc kubenswrapper[4728]: I0227 10:47:28.806159 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e45b815-1e80-40bf-a006-f7e62a4c0e64-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:28 crc kubenswrapper[4728]: I0227 10:47:28.814978 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e45b815-1e80-40bf-a006-f7e62a4c0e64-kube-api-access-f5t29" (OuterVolumeSpecName: "kube-api-access-f5t29") pod "1e45b815-1e80-40bf-a006-f7e62a4c0e64" (UID: "1e45b815-1e80-40bf-a006-f7e62a4c0e64"). InnerVolumeSpecName "kube-api-access-f5t29". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:28 crc kubenswrapper[4728]: I0227 10:47:28.907578 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5t29\" (UniqueName: \"kubernetes.io/projected/1e45b815-1e80-40bf-a006-f7e62a4c0e64-kube-api-access-f5t29\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.160298 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-n9grl"] Feb 27 10:47:29 crc kubenswrapper[4728]: W0227 10:47:29.172924 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f3f20b1_2c57_4b56_9b40_47dae5701446.slice/crio-8cd68ad8c1ca20d87fd25682351362d6ceed1323fa4d8a82a971a54c3a686e94 WatchSource:0}: Error finding container 8cd68ad8c1ca20d87fd25682351362d6ceed1323fa4d8a82a971a54c3a686e94: Status 404 returned error can't find the container with id 8cd68ad8c1ca20d87fd25682351362d6ceed1323fa4d8a82a971a54c3a686e94 Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.422213 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.450475 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ec0a9664-7538-43dd-904d-c386d569999e-etc-swift\") pod \"swift-storage-0\" (UID: \"ec0a9664-7538-43dd-904d-c386d569999e\") " pod="openstack/swift-storage-0" Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.550597 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jvhk9"] Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.579724 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-63d8-account-create-update-ntp7w"] Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.587848 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-69b9-account-create-update-4p7vx"] Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.595352 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-h9f2n"] Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.626295 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-851e-account-create-update-8stp4"] Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.674121 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.733760 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69b9-account-create-update-4p7vx" event={"ID":"439b89bb-a1bf-41c4-bb4f-612eaaeecb2f","Type":"ContainerStarted","Data":"f9548179012d771b8dcdff59f36627dd09deff463d809adb5e71745cb054c97f"} Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.734849 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-851e-account-create-update-8stp4" event={"ID":"1b2435b6-3193-40c1-85b8-e177682f9d3a","Type":"ContainerStarted","Data":"05a50683624f74b3ffb05a61490666514f030f45f24a24456f0c4ddb1d9af77c"} Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.736215 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jvhk9" event={"ID":"13a5f0a3-e12d-4f71-8551-63239926851b","Type":"ContainerStarted","Data":"dc46f4364a0f9b2e72fd87ee0d125bc6499f68436c5aacce518461a7b76f9d82"} Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.738473 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d90c432-384c-4a43-a2cf-b26c3804a632","Type":"ContainerStarted","Data":"956afb4d8c2d48f6fba5208e1b27cbd395b5fdace793befa706d3932e5f70322"} Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.741647 4728 generic.go:334] "Generic (PLEG): container finished" podID="0f3f20b1-2c57-4b56-9b40-47dae5701446" containerID="8b62547dddc79619b7ca691bcc8ad715f6f692b7ea7f6c32aa2f4755733a909a" exitCode=0 Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.742717 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-n9grl" event={"ID":"0f3f20b1-2c57-4b56-9b40-47dae5701446","Type":"ContainerDied","Data":"8b62547dddc79619b7ca691bcc8ad715f6f692b7ea7f6c32aa2f4755733a909a"} Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.742745 4728 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/placement-db-create-n9grl" event={"ID":"0f3f20b1-2c57-4b56-9b40-47dae5701446","Type":"ContainerStarted","Data":"8cd68ad8c1ca20d87fd25682351362d6ceed1323fa4d8a82a971a54c3a686e94"} Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.745909 4728 generic.go:334] "Generic (PLEG): container finished" podID="039bf17c-5c3a-4393-97e1-d8d78fb20fd8" containerID="7c4fd34a7a78d5cd5cc475e2214a4881e77504f6f19cc947476ad6351419c003" exitCode=0 Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.746073 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e32a-account-create-update-kxsr5" event={"ID":"039bf17c-5c3a-4393-97e1-d8d78fb20fd8","Type":"ContainerDied","Data":"7c4fd34a7a78d5cd5cc475e2214a4881e77504f6f19cc947476ad6351419c003"} Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.750338 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-h9f2n" event={"ID":"2f3369f2-cce2-4912-a3ff-59677709bda2","Type":"ContainerStarted","Data":"70722f706ab2983ef949a32a9b1a64f341d426e3f35f85946078e00a39c94d4e"} Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.751424 4728 generic.go:334] "Generic (PLEG): container finished" podID="e17ee83c-ab26-4c75-b2d6-e9278ddfcc06" containerID="3af49d92b83ed7cf122a4b46d316c3c1ac4d9de6b88e604fcbfc15d926ffd8c2" exitCode=0 Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.751467 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8g6gt" event={"ID":"e17ee83c-ab26-4c75-b2d6-e9278ddfcc06","Type":"ContainerDied","Data":"3af49d92b83ed7cf122a4b46d316c3c1ac4d9de6b88e604fcbfc15d926ffd8c2"} Feb 27 10:47:29 crc kubenswrapper[4728]: I0227 10:47:29.753078 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-63d8-account-create-update-ntp7w" 
event={"ID":"415d3211-9668-4f77-8ea5-6e60c3685c14","Type":"ContainerStarted","Data":"b7ff1828bb28c9bae4ebea8dc55837dabfec23280535fdf05e7af5f4357bce64"} Feb 27 10:47:30 crc kubenswrapper[4728]: I0227 10:47:30.303657 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 27 10:47:30 crc kubenswrapper[4728]: W0227 10:47:30.492654 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec0a9664_7538_43dd_904d_c386d569999e.slice/crio-6a69f137f2cee3453821149e7d31d55b80c86bf99f8bda53e63673e0adeed17d WatchSource:0}: Error finding container 6a69f137f2cee3453821149e7d31d55b80c86bf99f8bda53e63673e0adeed17d: Status 404 returned error can't find the container with id 6a69f137f2cee3453821149e7d31d55b80c86bf99f8bda53e63673e0adeed17d Feb 27 10:47:30 crc kubenswrapper[4728]: I0227 10:47:30.765863 4728 generic.go:334] "Generic (PLEG): container finished" podID="1b2435b6-3193-40c1-85b8-e177682f9d3a" containerID="5e2608124ad79d8b2fe74a73abf5915f99ce69f08f4eac9344a62df0fa7e7571" exitCode=0 Feb 27 10:47:30 crc kubenswrapper[4728]: I0227 10:47:30.765948 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-851e-account-create-update-8stp4" event={"ID":"1b2435b6-3193-40c1-85b8-e177682f9d3a","Type":"ContainerDied","Data":"5e2608124ad79d8b2fe74a73abf5915f99ce69f08f4eac9344a62df0fa7e7571"} Feb 27 10:47:30 crc kubenswrapper[4728]: I0227 10:47:30.769294 4728 generic.go:334] "Generic (PLEG): container finished" podID="13a5f0a3-e12d-4f71-8551-63239926851b" containerID="41f841acba8e2c07ee5924d86fe2c0f80b3baa7472969b6af931e1d6331adfdb" exitCode=0 Feb 27 10:47:30 crc kubenswrapper[4728]: I0227 10:47:30.769349 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jvhk9" event={"ID":"13a5f0a3-e12d-4f71-8551-63239926851b","Type":"ContainerDied","Data":"41f841acba8e2c07ee5924d86fe2c0f80b3baa7472969b6af931e1d6331adfdb"} Feb 27 
10:47:30 crc kubenswrapper[4728]: I0227 10:47:30.772158 4728 generic.go:334] "Generic (PLEG): container finished" podID="415d3211-9668-4f77-8ea5-6e60c3685c14" containerID="0f52ec5c91461471521b8a393b975aede7bcc72dfa8e16fdff2e79103c118587" exitCode=0 Feb 27 10:47:30 crc kubenswrapper[4728]: I0227 10:47:30.772218 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-63d8-account-create-update-ntp7w" event={"ID":"415d3211-9668-4f77-8ea5-6e60c3685c14","Type":"ContainerDied","Data":"0f52ec5c91461471521b8a393b975aede7bcc72dfa8e16fdff2e79103c118587"} Feb 27 10:47:30 crc kubenswrapper[4728]: I0227 10:47:30.774199 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec0a9664-7538-43dd-904d-c386d569999e","Type":"ContainerStarted","Data":"6a69f137f2cee3453821149e7d31d55b80c86bf99f8bda53e63673e0adeed17d"} Feb 27 10:47:30 crc kubenswrapper[4728]: I0227 10:47:30.781648 4728 generic.go:334] "Generic (PLEG): container finished" podID="2f3369f2-cce2-4912-a3ff-59677709bda2" containerID="52fda0da1f1a149338f78ddaeedd2ffd12ef6d0c88817e6ac905bcf6be6c150b" exitCode=0 Feb 27 10:47:30 crc kubenswrapper[4728]: I0227 10:47:30.781732 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-h9f2n" event={"ID":"2f3369f2-cce2-4912-a3ff-59677709bda2","Type":"ContainerDied","Data":"52fda0da1f1a149338f78ddaeedd2ffd12ef6d0c88817e6ac905bcf6be6c150b"} Feb 27 10:47:30 crc kubenswrapper[4728]: I0227 10:47:30.784047 4728 generic.go:334] "Generic (PLEG): container finished" podID="439b89bb-a1bf-41c4-bb4f-612eaaeecb2f" containerID="962f3a1ebe83c405bb3c360206b65206f001ff682990267d92836a74bf4dfca6" exitCode=0 Feb 27 10:47:30 crc kubenswrapper[4728]: I0227 10:47:30.784093 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69b9-account-create-update-4p7vx" 
event={"ID":"439b89bb-a1bf-41c4-bb4f-612eaaeecb2f","Type":"ContainerDied","Data":"962f3a1ebe83c405bb3c360206b65206f001ff682990267d92836a74bf4dfca6"} Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.330903 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8g6gt" Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.380434 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxpbl\" (UniqueName: \"kubernetes.io/projected/e17ee83c-ab26-4c75-b2d6-e9278ddfcc06-kube-api-access-kxpbl\") pod \"e17ee83c-ab26-4c75-b2d6-e9278ddfcc06\" (UID: \"e17ee83c-ab26-4c75-b2d6-e9278ddfcc06\") " Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.380566 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e17ee83c-ab26-4c75-b2d6-e9278ddfcc06-operator-scripts\") pod \"e17ee83c-ab26-4c75-b2d6-e9278ddfcc06\" (UID: \"e17ee83c-ab26-4c75-b2d6-e9278ddfcc06\") " Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.381387 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e17ee83c-ab26-4c75-b2d6-e9278ddfcc06-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e17ee83c-ab26-4c75-b2d6-e9278ddfcc06" (UID: "e17ee83c-ab26-4c75-b2d6-e9278ddfcc06"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.483465 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17ee83c-ab26-4c75-b2d6-e9278ddfcc06-kube-api-access-kxpbl" (OuterVolumeSpecName: "kube-api-access-kxpbl") pod "e17ee83c-ab26-4c75-b2d6-e9278ddfcc06" (UID: "e17ee83c-ab26-4c75-b2d6-e9278ddfcc06"). InnerVolumeSpecName "kube-api-access-kxpbl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.488488 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e17ee83c-ab26-4c75-b2d6-e9278ddfcc06-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.488775 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxpbl\" (UniqueName: \"kubernetes.io/projected/e17ee83c-ab26-4c75-b2d6-e9278ddfcc06-kube-api-access-kxpbl\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.799256 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e32a-account-create-update-kxsr5" event={"ID":"039bf17c-5c3a-4393-97e1-d8d78fb20fd8","Type":"ContainerDied","Data":"4aaa4e657baf4382eff9fe72fb319d15937b4a23b98ff2e117d3eb119be8d874"} Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.799554 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aaa4e657baf4382eff9fe72fb319d15937b4a23b98ff2e117d3eb119be8d874" Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.801138 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-8g6gt" Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.801133 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8g6gt" event={"ID":"e17ee83c-ab26-4c75-b2d6-e9278ddfcc06","Type":"ContainerDied","Data":"c2861b8a87aa381222d050c07fcfc51b84c4f2548113ed5934221d7274209bdd"} Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.801202 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2861b8a87aa381222d050c07fcfc51b84c4f2548113ed5934221d7274209bdd" Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.803252 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-n9grl" event={"ID":"0f3f20b1-2c57-4b56-9b40-47dae5701446","Type":"ContainerDied","Data":"8cd68ad8c1ca20d87fd25682351362d6ceed1323fa4d8a82a971a54c3a686e94"} Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.803297 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cd68ad8c1ca20d87fd25682351362d6ceed1323fa4d8a82a971a54c3a686e94" Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.826122 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e32a-account-create-update-kxsr5" Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.843747 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-n9grl" Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.898623 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/039bf17c-5c3a-4393-97e1-d8d78fb20fd8-operator-scripts\") pod \"039bf17c-5c3a-4393-97e1-d8d78fb20fd8\" (UID: \"039bf17c-5c3a-4393-97e1-d8d78fb20fd8\") " Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.898673 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjnjs\" (UniqueName: \"kubernetes.io/projected/039bf17c-5c3a-4393-97e1-d8d78fb20fd8-kube-api-access-fjnjs\") pod \"039bf17c-5c3a-4393-97e1-d8d78fb20fd8\" (UID: \"039bf17c-5c3a-4393-97e1-d8d78fb20fd8\") " Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.898747 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgf5x\" (UniqueName: \"kubernetes.io/projected/0f3f20b1-2c57-4b56-9b40-47dae5701446-kube-api-access-mgf5x\") pod \"0f3f20b1-2c57-4b56-9b40-47dae5701446\" (UID: \"0f3f20b1-2c57-4b56-9b40-47dae5701446\") " Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.898830 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f3f20b1-2c57-4b56-9b40-47dae5701446-operator-scripts\") pod \"0f3f20b1-2c57-4b56-9b40-47dae5701446\" (UID: \"0f3f20b1-2c57-4b56-9b40-47dae5701446\") " Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.900067 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/039bf17c-5c3a-4393-97e1-d8d78fb20fd8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "039bf17c-5c3a-4393-97e1-d8d78fb20fd8" (UID: "039bf17c-5c3a-4393-97e1-d8d78fb20fd8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.900128 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f3f20b1-2c57-4b56-9b40-47dae5701446-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0f3f20b1-2c57-4b56-9b40-47dae5701446" (UID: "0f3f20b1-2c57-4b56-9b40-47dae5701446"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.905354 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f3f20b1-2c57-4b56-9b40-47dae5701446-kube-api-access-mgf5x" (OuterVolumeSpecName: "kube-api-access-mgf5x") pod "0f3f20b1-2c57-4b56-9b40-47dae5701446" (UID: "0f3f20b1-2c57-4b56-9b40-47dae5701446"). InnerVolumeSpecName "kube-api-access-mgf5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:31 crc kubenswrapper[4728]: I0227 10:47:31.909733 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/039bf17c-5c3a-4393-97e1-d8d78fb20fd8-kube-api-access-fjnjs" (OuterVolumeSpecName: "kube-api-access-fjnjs") pod "039bf17c-5c3a-4393-97e1-d8d78fb20fd8" (UID: "039bf17c-5c3a-4393-97e1-d8d78fb20fd8"). InnerVolumeSpecName "kube-api-access-fjnjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.000716 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/039bf17c-5c3a-4393-97e1-d8d78fb20fd8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.000840 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjnjs\" (UniqueName: \"kubernetes.io/projected/039bf17c-5c3a-4393-97e1-d8d78fb20fd8-kube-api-access-fjnjs\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.000912 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgf5x\" (UniqueName: \"kubernetes.io/projected/0f3f20b1-2c57-4b56-9b40-47dae5701446-kube-api-access-mgf5x\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.000964 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f3f20b1-2c57-4b56-9b40-47dae5701446-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.217153 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.267478 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-h9f2n" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.305554 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f3369f2-cce2-4912-a3ff-59677709bda2-operator-scripts\") pod \"2f3369f2-cce2-4912-a3ff-59677709bda2\" (UID: \"2f3369f2-cce2-4912-a3ff-59677709bda2\") " Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.305771 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6656p\" (UniqueName: \"kubernetes.io/projected/2f3369f2-cce2-4912-a3ff-59677709bda2-kube-api-access-6656p\") pod \"2f3369f2-cce2-4912-a3ff-59677709bda2\" (UID: \"2f3369f2-cce2-4912-a3ff-59677709bda2\") " Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.306040 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f3369f2-cce2-4912-a3ff-59677709bda2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f3369f2-cce2-4912-a3ff-59677709bda2" (UID: "2f3369f2-cce2-4912-a3ff-59677709bda2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.306333 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f3369f2-cce2-4912-a3ff-59677709bda2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.308415 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f3369f2-cce2-4912-a3ff-59677709bda2-kube-api-access-6656p" (OuterVolumeSpecName: "kube-api-access-6656p") pod "2f3369f2-cce2-4912-a3ff-59677709bda2" (UID: "2f3369f2-cce2-4912-a3ff-59677709bda2"). InnerVolumeSpecName "kube-api-access-6656p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.407973 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6656p\" (UniqueName: \"kubernetes.io/projected/2f3369f2-cce2-4912-a3ff-59677709bda2-kube-api-access-6656p\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.483949 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-r5wj6"] Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.493680 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-r5wj6"] Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.512750 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-69b9-account-create-update-4p7vx" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.534354 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-63d8-account-create-update-ntp7w" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.554034 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jvhk9" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.561895 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-851e-account-create-update-8stp4" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.624101 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp5zx\" (UniqueName: \"kubernetes.io/projected/415d3211-9668-4f77-8ea5-6e60c3685c14-kube-api-access-jp5zx\") pod \"415d3211-9668-4f77-8ea5-6e60c3685c14\" (UID: \"415d3211-9668-4f77-8ea5-6e60c3685c14\") " Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.624160 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dftr\" (UniqueName: \"kubernetes.io/projected/1b2435b6-3193-40c1-85b8-e177682f9d3a-kube-api-access-7dftr\") pod \"1b2435b6-3193-40c1-85b8-e177682f9d3a\" (UID: \"1b2435b6-3193-40c1-85b8-e177682f9d3a\") " Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.624251 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b2435b6-3193-40c1-85b8-e177682f9d3a-operator-scripts\") pod \"1b2435b6-3193-40c1-85b8-e177682f9d3a\" (UID: \"1b2435b6-3193-40c1-85b8-e177682f9d3a\") " Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.624277 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/439b89bb-a1bf-41c4-bb4f-612eaaeecb2f-operator-scripts\") pod \"439b89bb-a1bf-41c4-bb4f-612eaaeecb2f\" (UID: \"439b89bb-a1bf-41c4-bb4f-612eaaeecb2f\") " Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.624330 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnf2k\" (UniqueName: \"kubernetes.io/projected/439b89bb-a1bf-41c4-bb4f-612eaaeecb2f-kube-api-access-rnf2k\") pod \"439b89bb-a1bf-41c4-bb4f-612eaaeecb2f\" (UID: \"439b89bb-a1bf-41c4-bb4f-612eaaeecb2f\") " Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.624476 4728 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-z9hct\" (UniqueName: \"kubernetes.io/projected/13a5f0a3-e12d-4f71-8551-63239926851b-kube-api-access-z9hct\") pod \"13a5f0a3-e12d-4f71-8551-63239926851b\" (UID: \"13a5f0a3-e12d-4f71-8551-63239926851b\") " Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.624548 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13a5f0a3-e12d-4f71-8551-63239926851b-operator-scripts\") pod \"13a5f0a3-e12d-4f71-8551-63239926851b\" (UID: \"13a5f0a3-e12d-4f71-8551-63239926851b\") " Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.624569 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/415d3211-9668-4f77-8ea5-6e60c3685c14-operator-scripts\") pod \"415d3211-9668-4f77-8ea5-6e60c3685c14\" (UID: \"415d3211-9668-4f77-8ea5-6e60c3685c14\") " Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.625233 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/439b89bb-a1bf-41c4-bb4f-612eaaeecb2f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "439b89bb-a1bf-41c4-bb4f-612eaaeecb2f" (UID: "439b89bb-a1bf-41c4-bb4f-612eaaeecb2f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.625415 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/415d3211-9668-4f77-8ea5-6e60c3685c14-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "415d3211-9668-4f77-8ea5-6e60c3685c14" (UID: "415d3211-9668-4f77-8ea5-6e60c3685c14"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.625557 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b2435b6-3193-40c1-85b8-e177682f9d3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b2435b6-3193-40c1-85b8-e177682f9d3a" (UID: "1b2435b6-3193-40c1-85b8-e177682f9d3a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.626793 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a5f0a3-e12d-4f71-8551-63239926851b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13a5f0a3-e12d-4f71-8551-63239926851b" (UID: "13a5f0a3-e12d-4f71-8551-63239926851b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.630728 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b2435b6-3193-40c1-85b8-e177682f9d3a-kube-api-access-7dftr" (OuterVolumeSpecName: "kube-api-access-7dftr") pod "1b2435b6-3193-40c1-85b8-e177682f9d3a" (UID: "1b2435b6-3193-40c1-85b8-e177682f9d3a"). InnerVolumeSpecName "kube-api-access-7dftr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.631555 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13a5f0a3-e12d-4f71-8551-63239926851b-kube-api-access-z9hct" (OuterVolumeSpecName: "kube-api-access-z9hct") pod "13a5f0a3-e12d-4f71-8551-63239926851b" (UID: "13a5f0a3-e12d-4f71-8551-63239926851b"). InnerVolumeSpecName "kube-api-access-z9hct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.631987 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/415d3211-9668-4f77-8ea5-6e60c3685c14-kube-api-access-jp5zx" (OuterVolumeSpecName: "kube-api-access-jp5zx") pod "415d3211-9668-4f77-8ea5-6e60c3685c14" (UID: "415d3211-9668-4f77-8ea5-6e60c3685c14"). InnerVolumeSpecName "kube-api-access-jp5zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.634223 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/439b89bb-a1bf-41c4-bb4f-612eaaeecb2f-kube-api-access-rnf2k" (OuterVolumeSpecName: "kube-api-access-rnf2k") pod "439b89bb-a1bf-41c4-bb4f-612eaaeecb2f" (UID: "439b89bb-a1bf-41c4-bb4f-612eaaeecb2f"). InnerVolumeSpecName "kube-api-access-rnf2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.730429 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/439b89bb-a1bf-41c4-bb4f-612eaaeecb2f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.730486 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnf2k\" (UniqueName: \"kubernetes.io/projected/439b89bb-a1bf-41c4-bb4f-612eaaeecb2f-kube-api-access-rnf2k\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.730519 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9hct\" (UniqueName: \"kubernetes.io/projected/13a5f0a3-e12d-4f71-8551-63239926851b-kube-api-access-z9hct\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.730531 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/13a5f0a3-e12d-4f71-8551-63239926851b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.730540 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/415d3211-9668-4f77-8ea5-6e60c3685c14-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.730548 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp5zx\" (UniqueName: \"kubernetes.io/projected/415d3211-9668-4f77-8ea5-6e60c3685c14-kube-api-access-jp5zx\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.730559 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dftr\" (UniqueName: \"kubernetes.io/projected/1b2435b6-3193-40c1-85b8-e177682f9d3a-kube-api-access-7dftr\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.730568 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b2435b6-3193-40c1-85b8-e177682f9d3a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.738415 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e45b815-1e80-40bf-a006-f7e62a4c0e64" path="/var/lib/kubelet/pods/1e45b815-1e80-40bf-a006-f7e62a4c0e64/volumes" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.812201 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-h9f2n" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.812199 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-h9f2n" event={"ID":"2f3369f2-cce2-4912-a3ff-59677709bda2","Type":"ContainerDied","Data":"70722f706ab2983ef949a32a9b1a64f341d426e3f35f85946078e00a39c94d4e"} Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.813711 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-69b9-account-create-update-4p7vx" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.813713 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70722f706ab2983ef949a32a9b1a64f341d426e3f35f85946078e00a39c94d4e" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.813844 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69b9-account-create-update-4p7vx" event={"ID":"439b89bb-a1bf-41c4-bb4f-612eaaeecb2f","Type":"ContainerDied","Data":"f9548179012d771b8dcdff59f36627dd09deff463d809adb5e71745cb054c97f"} Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.813884 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9548179012d771b8dcdff59f36627dd09deff463d809adb5e71745cb054c97f" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.815564 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-851e-account-create-update-8stp4" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.815574 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-851e-account-create-update-8stp4" event={"ID":"1b2435b6-3193-40c1-85b8-e177682f9d3a","Type":"ContainerDied","Data":"05a50683624f74b3ffb05a61490666514f030f45f24a24456f0c4ddb1d9af77c"} Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.815596 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05a50683624f74b3ffb05a61490666514f030f45f24a24456f0c4ddb1d9af77c" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.818076 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jvhk9" event={"ID":"13a5f0a3-e12d-4f71-8551-63239926851b","Type":"ContainerDied","Data":"dc46f4364a0f9b2e72fd87ee0d125bc6499f68436c5aacce518461a7b76f9d82"} Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.818100 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc46f4364a0f9b2e72fd87ee0d125bc6499f68436c5aacce518461a7b76f9d82" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.818216 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jvhk9" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.819878 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-63d8-account-create-update-ntp7w" event={"ID":"415d3211-9668-4f77-8ea5-6e60c3685c14","Type":"ContainerDied","Data":"b7ff1828bb28c9bae4ebea8dc55837dabfec23280535fdf05e7af5f4357bce64"} Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.819947 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7ff1828bb28c9bae4ebea8dc55837dabfec23280535fdf05e7af5f4357bce64" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.819899 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-63d8-account-create-update-ntp7w" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.822570 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec0a9664-7538-43dd-904d-c386d569999e","Type":"ContainerStarted","Data":"d41c27a0184f1c44a7b380d6440e6ab07add34c9331b77384ff89dd652046eac"} Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.822597 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec0a9664-7538-43dd-904d-c386d569999e","Type":"ContainerStarted","Data":"272bc71e334d26ce40f35c4d7efd53091588b173a63ef0cfb5fa8353450e047f"} Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.822606 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec0a9664-7538-43dd-904d-c386d569999e","Type":"ContainerStarted","Data":"22780a23845764634692a8d77ad88235fb35a84c084bb31b575c37558116ebe1"} Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.825248 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"9d90c432-384c-4a43-a2cf-b26c3804a632","Type":"ContainerStarted","Data":"397869c57417caa95e8810cfd09cda540ce0e2c2325cea42dfea7dd8548be2c3"} Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.825260 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-n9grl" Feb 27 10:47:32 crc kubenswrapper[4728]: I0227 10:47:32.825282 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e32a-account-create-update-kxsr5" Feb 27 10:47:33 crc kubenswrapper[4728]: I0227 10:47:33.869972 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec0a9664-7538-43dd-904d-c386d569999e","Type":"ContainerStarted","Data":"990ec1bde08dc640f6e71e6832b95671935f9d92c56ad536ddc16269f0174f5b"} Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.378035 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-bd5fc" podUID="20d22b86-c3cb-4b12-8e88-35369d033e1e" containerName="ovn-controller" probeResult="failure" output=< Feb 27 10:47:34 crc kubenswrapper[4728]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 27 10:47:34 crc kubenswrapper[4728]: > Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.715825 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-98848"] Feb 27 10:47:34 crc kubenswrapper[4728]: E0227 10:47:34.716359 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b2435b6-3193-40c1-85b8-e177682f9d3a" containerName="mariadb-account-create-update" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.716378 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b2435b6-3193-40c1-85b8-e177682f9d3a" containerName="mariadb-account-create-update" Feb 27 10:47:34 crc kubenswrapper[4728]: E0227 10:47:34.716395 4728 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="439b89bb-a1bf-41c4-bb4f-612eaaeecb2f" containerName="mariadb-account-create-update" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.716402 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="439b89bb-a1bf-41c4-bb4f-612eaaeecb2f" containerName="mariadb-account-create-update" Feb 27 10:47:34 crc kubenswrapper[4728]: E0227 10:47:34.716581 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e45b815-1e80-40bf-a006-f7e62a4c0e64" containerName="mariadb-account-create-update" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.716588 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e45b815-1e80-40bf-a006-f7e62a4c0e64" containerName="mariadb-account-create-update" Feb 27 10:47:34 crc kubenswrapper[4728]: E0227 10:47:34.716599 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13a5f0a3-e12d-4f71-8551-63239926851b" containerName="mariadb-database-create" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.716625 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a5f0a3-e12d-4f71-8551-63239926851b" containerName="mariadb-database-create" Feb 27 10:47:34 crc kubenswrapper[4728]: E0227 10:47:34.716634 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3369f2-cce2-4912-a3ff-59677709bda2" containerName="mariadb-database-create" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.716639 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3369f2-cce2-4912-a3ff-59677709bda2" containerName="mariadb-database-create" Feb 27 10:47:34 crc kubenswrapper[4728]: E0227 10:47:34.716650 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f3f20b1-2c57-4b56-9b40-47dae5701446" containerName="mariadb-database-create" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.716656 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f3f20b1-2c57-4b56-9b40-47dae5701446" containerName="mariadb-database-create" Feb 27 10:47:34 crc 
kubenswrapper[4728]: E0227 10:47:34.716677 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="039bf17c-5c3a-4393-97e1-d8d78fb20fd8" containerName="mariadb-account-create-update" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.716684 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="039bf17c-5c3a-4393-97e1-d8d78fb20fd8" containerName="mariadb-account-create-update" Feb 27 10:47:34 crc kubenswrapper[4728]: E0227 10:47:34.716694 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17ee83c-ab26-4c75-b2d6-e9278ddfcc06" containerName="mariadb-database-create" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.716700 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17ee83c-ab26-4c75-b2d6-e9278ddfcc06" containerName="mariadb-database-create" Feb 27 10:47:34 crc kubenswrapper[4728]: E0227 10:47:34.716715 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="415d3211-9668-4f77-8ea5-6e60c3685c14" containerName="mariadb-account-create-update" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.716720 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="415d3211-9668-4f77-8ea5-6e60c3685c14" containerName="mariadb-account-create-update" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.716881 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="039bf17c-5c3a-4393-97e1-d8d78fb20fd8" containerName="mariadb-account-create-update" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.716894 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="415d3211-9668-4f77-8ea5-6e60c3685c14" containerName="mariadb-account-create-update" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.716906 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e45b815-1e80-40bf-a006-f7e62a4c0e64" containerName="mariadb-account-create-update" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.716917 4728 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="0f3f20b1-2c57-4b56-9b40-47dae5701446" containerName="mariadb-database-create" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.716926 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="13a5f0a3-e12d-4f71-8551-63239926851b" containerName="mariadb-database-create" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.716934 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f3369f2-cce2-4912-a3ff-59677709bda2" containerName="mariadb-database-create" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.716944 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b2435b6-3193-40c1-85b8-e177682f9d3a" containerName="mariadb-account-create-update" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.716957 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="439b89bb-a1bf-41c4-bb4f-612eaaeecb2f" containerName="mariadb-account-create-update" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.716968 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17ee83c-ab26-4c75-b2d6-e9278ddfcc06" containerName="mariadb-database-create" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.717614 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-98848" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.720007 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qvchz" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.723479 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.746252 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-98848"] Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.788656 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc607cb0-0557-4198-8bae-07f9a55cf4a5-config-data\") pod \"glance-db-sync-98848\" (UID: \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\") " pod="openstack/glance-db-sync-98848" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.788724 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkppg\" (UniqueName: \"kubernetes.io/projected/cc607cb0-0557-4198-8bae-07f9a55cf4a5-kube-api-access-kkppg\") pod \"glance-db-sync-98848\" (UID: \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\") " pod="openstack/glance-db-sync-98848" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.788752 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc607cb0-0557-4198-8bae-07f9a55cf4a5-combined-ca-bundle\") pod \"glance-db-sync-98848\" (UID: \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\") " pod="openstack/glance-db-sync-98848" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.788779 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/cc607cb0-0557-4198-8bae-07f9a55cf4a5-db-sync-config-data\") pod \"glance-db-sync-98848\" (UID: \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\") " pod="openstack/glance-db-sync-98848" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.892960 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc607cb0-0557-4198-8bae-07f9a55cf4a5-combined-ca-bundle\") pod \"glance-db-sync-98848\" (UID: \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\") " pod="openstack/glance-db-sync-98848" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.893056 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc607cb0-0557-4198-8bae-07f9a55cf4a5-db-sync-config-data\") pod \"glance-db-sync-98848\" (UID: \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\") " pod="openstack/glance-db-sync-98848" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.893329 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc607cb0-0557-4198-8bae-07f9a55cf4a5-config-data\") pod \"glance-db-sync-98848\" (UID: \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\") " pod="openstack/glance-db-sync-98848" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.893397 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkppg\" (UniqueName: \"kubernetes.io/projected/cc607cb0-0557-4198-8bae-07f9a55cf4a5-kube-api-access-kkppg\") pod \"glance-db-sync-98848\" (UID: \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\") " pod="openstack/glance-db-sync-98848" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.899942 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc607cb0-0557-4198-8bae-07f9a55cf4a5-db-sync-config-data\") pod \"glance-db-sync-98848\" (UID: 
\"cc607cb0-0557-4198-8bae-07f9a55cf4a5\") " pod="openstack/glance-db-sync-98848" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.900852 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc607cb0-0557-4198-8bae-07f9a55cf4a5-combined-ca-bundle\") pod \"glance-db-sync-98848\" (UID: \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\") " pod="openstack/glance-db-sync-98848" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.910088 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc607cb0-0557-4198-8bae-07f9a55cf4a5-config-data\") pod \"glance-db-sync-98848\" (UID: \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\") " pod="openstack/glance-db-sync-98848" Feb 27 10:47:34 crc kubenswrapper[4728]: I0227 10:47:34.916298 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkppg\" (UniqueName: \"kubernetes.io/projected/cc607cb0-0557-4198-8bae-07f9a55cf4a5-kube-api-access-kkppg\") pod \"glance-db-sync-98848\" (UID: \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\") " pod="openstack/glance-db-sync-98848" Feb 27 10:47:35 crc kubenswrapper[4728]: I0227 10:47:35.045175 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-98848" Feb 27 10:47:35 crc kubenswrapper[4728]: I0227 10:47:35.896972 4728 generic.go:334] "Generic (PLEG): container finished" podID="ad00da50-2e05-4612-a862-5cccd698e77b" containerID="ece13434c955547aaf3f7f164eaf74b912d99426d2f94d33488bf7c110f9b30c" exitCode=0 Feb 27 10:47:35 crc kubenswrapper[4728]: I0227 10:47:35.897025 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ad00da50-2e05-4612-a862-5cccd698e77b","Type":"ContainerDied","Data":"ece13434c955547aaf3f7f164eaf74b912d99426d2f94d33488bf7c110f9b30c"} Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.545957 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-xbvtd"] Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.548149 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xbvtd" Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.556366 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-xbvtd"] Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.642079 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7nvm\" (UniqueName: \"kubernetes.io/projected/f18f0df8-5474-49a0-b699-b8199f62036e-kube-api-access-q7nvm\") pod \"mysqld-exporter-openstack-cell1-db-create-xbvtd\" (UID: \"f18f0df8-5474-49a0-b699-b8199f62036e\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-xbvtd" Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.642765 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f18f0df8-5474-49a0-b699-b8199f62036e-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-xbvtd\" (UID: 
\"f18f0df8-5474-49a0-b699-b8199f62036e\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-xbvtd" Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.745885 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f18f0df8-5474-49a0-b699-b8199f62036e-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-xbvtd\" (UID: \"f18f0df8-5474-49a0-b699-b8199f62036e\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-xbvtd" Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.746408 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7nvm\" (UniqueName: \"kubernetes.io/projected/f18f0df8-5474-49a0-b699-b8199f62036e-kube-api-access-q7nvm\") pod \"mysqld-exporter-openstack-cell1-db-create-xbvtd\" (UID: \"f18f0df8-5474-49a0-b699-b8199f62036e\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-xbvtd" Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.747419 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f18f0df8-5474-49a0-b699-b8199f62036e-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-xbvtd\" (UID: \"f18f0df8-5474-49a0-b699-b8199f62036e\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-xbvtd" Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.766869 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-6a86-account-create-update-5cwvr"] Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.767387 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7nvm\" (UniqueName: \"kubernetes.io/projected/f18f0df8-5474-49a0-b699-b8199f62036e-kube-api-access-q7nvm\") pod \"mysqld-exporter-openstack-cell1-db-create-xbvtd\" (UID: \"f18f0df8-5474-49a0-b699-b8199f62036e\") " 
pod="openstack/mysqld-exporter-openstack-cell1-db-create-xbvtd" Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.768632 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-6a86-account-create-update-5cwvr" Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.777386 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 27 10:47:36 crc kubenswrapper[4728]: W0227 10:47:36.780257 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc607cb0_0557_4198_8bae_07f9a55cf4a5.slice/crio-836ca9132bd4f002a28430cb9d38b0569502e39063b8d6f6523eacceec8a35e1 WatchSource:0}: Error finding container 836ca9132bd4f002a28430cb9d38b0569502e39063b8d6f6523eacceec8a35e1: Status 404 returned error can't find the container with id 836ca9132bd4f002a28430cb9d38b0569502e39063b8d6f6523eacceec8a35e1 Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.797131 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-6a86-account-create-update-5cwvr"] Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.806833 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-98848"] Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.878179 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xbvtd" Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.911583 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec0a9664-7538-43dd-904d-c386d569999e","Type":"ContainerStarted","Data":"f1388c89c02fea322004ffd93272e1e89443ab96daeb3683d921188cdc6256ce"} Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.911632 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec0a9664-7538-43dd-904d-c386d569999e","Type":"ContainerStarted","Data":"935f3c6594043066542a191f82993cea40cb3474e470986576737adff084c183"} Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.914704 4728 generic.go:334] "Generic (PLEG): container finished" podID="d96ab6cd-ed9d-4924-9566-91930411701d" containerID="e4997d36ad5328d03f64dcc85aa7e6861c52e45b14220b75791b03e527d710b7" exitCode=0 Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.914767 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"d96ab6cd-ed9d-4924-9566-91930411701d","Type":"ContainerDied","Data":"e4997d36ad5328d03f64dcc85aa7e6861c52e45b14220b75791b03e527d710b7"} Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.919911 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-98848" event={"ID":"cc607cb0-0557-4198-8bae-07f9a55cf4a5","Type":"ContainerStarted","Data":"836ca9132bd4f002a28430cb9d38b0569502e39063b8d6f6523eacceec8a35e1"} Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.924094 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d90c432-384c-4a43-a2cf-b26c3804a632","Type":"ContainerStarted","Data":"7ac31a4868dcef8ffc330424a4b9b6b6de7975e4b28d6d2ecc5063dcc32400cc"} Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.925944 4728 generic.go:334] "Generic (PLEG): container finished" 
podID="5948716b-2c2b-4a90-b4b5-f8daad17f020" containerID="1ea7d949740097930544753162eda041e0ea42aee0a8c73e677e4887cdfaae0c" exitCode=0 Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.926144 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5948716b-2c2b-4a90-b4b5-f8daad17f020","Type":"ContainerDied","Data":"1ea7d949740097930544753162eda041e0ea42aee0a8c73e677e4887cdfaae0c"} Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.930272 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ad00da50-2e05-4612-a862-5cccd698e77b","Type":"ContainerStarted","Data":"e23737f0ce37aa688192a9b087adece80e03ee173aeb672a9a8026ba67e7e977"} Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.931040 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.952180 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170c93c3-680a-4c65-a684-9fcf38798fb2-operator-scripts\") pod \"mysqld-exporter-6a86-account-create-update-5cwvr\" (UID: \"170c93c3-680a-4c65-a684-9fcf38798fb2\") " pod="openstack/mysqld-exporter-6a86-account-create-update-5cwvr" Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.952469 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4zh7\" (UniqueName: \"kubernetes.io/projected/170c93c3-680a-4c65-a684-9fcf38798fb2-kube-api-access-t4zh7\") pod \"mysqld-exporter-6a86-account-create-update-5cwvr\" (UID: \"170c93c3-680a-4c65-a684-9fcf38798fb2\") " pod="openstack/mysqld-exporter-6a86-account-create-update-5cwvr" Feb 27 10:47:36 crc kubenswrapper[4728]: I0227 10:47:36.981233 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" 
podStartSLOduration=12.020037224 podStartE2EDuration="1m0.981180167s" podCreationTimestamp="2026-02-27 10:46:36 +0000 UTC" firstStartedPulling="2026-02-27 10:46:47.250060149 +0000 UTC m=+1227.212426255" lastFinishedPulling="2026-02-27 10:47:36.211203082 +0000 UTC m=+1276.173569198" observedRunningTime="2026-02-27 10:47:36.966548495 +0000 UTC m=+1276.928914611" watchObservedRunningTime="2026-02-27 10:47:36.981180167 +0000 UTC m=+1276.943546273" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.029526 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=54.64005461 podStartE2EDuration="1m8.029453945s" podCreationTimestamp="2026-02-27 10:46:29 +0000 UTC" firstStartedPulling="2026-02-27 10:46:45.700405622 +0000 UTC m=+1225.662771748" lastFinishedPulling="2026-02-27 10:46:59.089804977 +0000 UTC m=+1239.052171083" observedRunningTime="2026-02-27 10:47:36.997839545 +0000 UTC m=+1276.960205661" watchObservedRunningTime="2026-02-27 10:47:37.029453945 +0000 UTC m=+1276.991820041" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.056050 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170c93c3-680a-4c65-a684-9fcf38798fb2-operator-scripts\") pod \"mysqld-exporter-6a86-account-create-update-5cwvr\" (UID: \"170c93c3-680a-4c65-a684-9fcf38798fb2\") " pod="openstack/mysqld-exporter-6a86-account-create-update-5cwvr" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.056436 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4zh7\" (UniqueName: \"kubernetes.io/projected/170c93c3-680a-4c65-a684-9fcf38798fb2-kube-api-access-t4zh7\") pod \"mysqld-exporter-6a86-account-create-update-5cwvr\" (UID: \"170c93c3-680a-4c65-a684-9fcf38798fb2\") " pod="openstack/mysqld-exporter-6a86-account-create-update-5cwvr" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 
10:47:37.059296 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170c93c3-680a-4c65-a684-9fcf38798fb2-operator-scripts\") pod \"mysqld-exporter-6a86-account-create-update-5cwvr\" (UID: \"170c93c3-680a-4c65-a684-9fcf38798fb2\") " pod="openstack/mysqld-exporter-6a86-account-create-update-5cwvr" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.100286 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4zh7\" (UniqueName: \"kubernetes.io/projected/170c93c3-680a-4c65-a684-9fcf38798fb2-kube-api-access-t4zh7\") pod \"mysqld-exporter-6a86-account-create-update-5cwvr\" (UID: \"170c93c3-680a-4c65-a684-9fcf38798fb2\") " pod="openstack/mysqld-exporter-6a86-account-create-update-5cwvr" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.399035 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-6a86-account-create-update-5cwvr" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.463884 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-xbvtd"] Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.512199 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-m2jc8"] Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.552924 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-m2jc8"] Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.559727 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-m2jc8" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.563055 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.570869 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltjlj\" (UniqueName: \"kubernetes.io/projected/95695a61-7232-4058-a53f-4452a50cead2-kube-api-access-ltjlj\") pod \"root-account-create-update-m2jc8\" (UID: \"95695a61-7232-4058-a53f-4452a50cead2\") " pod="openstack/root-account-create-update-m2jc8" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.571021 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95695a61-7232-4058-a53f-4452a50cead2-operator-scripts\") pod \"root-account-create-update-m2jc8\" (UID: \"95695a61-7232-4058-a53f-4452a50cead2\") " pod="openstack/root-account-create-update-m2jc8" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.672716 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95695a61-7232-4058-a53f-4452a50cead2-operator-scripts\") pod \"root-account-create-update-m2jc8\" (UID: \"95695a61-7232-4058-a53f-4452a50cead2\") " pod="openstack/root-account-create-update-m2jc8" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.673030 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltjlj\" (UniqueName: \"kubernetes.io/projected/95695a61-7232-4058-a53f-4452a50cead2-kube-api-access-ltjlj\") pod \"root-account-create-update-m2jc8\" (UID: \"95695a61-7232-4058-a53f-4452a50cead2\") " pod="openstack/root-account-create-update-m2jc8" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.674085 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95695a61-7232-4058-a53f-4452a50cead2-operator-scripts\") pod \"root-account-create-update-m2jc8\" (UID: \"95695a61-7232-4058-a53f-4452a50cead2\") " pod="openstack/root-account-create-update-m2jc8" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.703297 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltjlj\" (UniqueName: \"kubernetes.io/projected/95695a61-7232-4058-a53f-4452a50cead2-kube-api-access-ltjlj\") pod \"root-account-create-update-m2jc8\" (UID: \"95695a61-7232-4058-a53f-4452a50cead2\") " pod="openstack/root-account-create-update-m2jc8" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.708935 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.708966 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.717317 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.946391 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5948716b-2c2b-4a90-b4b5-f8daad17f020","Type":"ContainerStarted","Data":"9134c47853f75890d3740060e1e872c3d4608294607fb6e716c7ae974c2862b6"} Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.946910 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.948718 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xbvtd" 
event={"ID":"f18f0df8-5474-49a0-b699-b8199f62036e","Type":"ContainerStarted","Data":"4a65eeb578e610dfac302ef1f9c4e085056dcb3dd256b13a5eefe2a38efd4186"} Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.950220 4728 generic.go:334] "Generic (PLEG): container finished" podID="26ecfb63-8476-497d-9cb3-3729c4961b4e" containerID="7085535c1cf06df2af491ea6ba1e48ccf7c883b1ebac3eccf340158c02955b37" exitCode=0 Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.950272 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"26ecfb63-8476-497d-9cb3-3729c4961b4e","Type":"ContainerDied","Data":"7085535c1cf06df2af491ea6ba1e48ccf7c883b1ebac3eccf340158c02955b37"} Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.955477 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-m2jc8" Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.972985 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec0a9664-7538-43dd-904d-c386d569999e","Type":"ContainerStarted","Data":"9b2089aaa39eaddc85b652297945639b065d49f4583e29a09e446498898ce8aa"} Feb 27 10:47:37 crc kubenswrapper[4728]: I0227 10:47:37.973039 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec0a9664-7538-43dd-904d-c386d569999e","Type":"ContainerStarted","Data":"8f566ead7fca6709c9826ec8cc4ee7ecdded571e775f3ecfd290559af8c1dfa7"} Feb 27 10:47:38 crc kubenswrapper[4728]: I0227 10:47:38.005909 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"d96ab6cd-ed9d-4924-9566-91930411701d","Type":"ContainerStarted","Data":"58a583aacc893b48f9a492c1d4b391f04e81fe2b5efd19706fceab87c90e9031"} Feb 27 10:47:38 crc kubenswrapper[4728]: I0227 10:47:38.007953 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 27 10:47:38 crc 
kubenswrapper[4728]: I0227 10:47:38.009156 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=55.125171039 podStartE2EDuration="1m9.009134687s" podCreationTimestamp="2026-02-27 10:46:29 +0000 UTC" firstStartedPulling="2026-02-27 10:46:47.273738299 +0000 UTC m=+1227.236104425" lastFinishedPulling="2026-02-27 10:47:01.157701967 +0000 UTC m=+1241.120068073" observedRunningTime="2026-02-27 10:47:37.972862219 +0000 UTC m=+1277.935228335" watchObservedRunningTime="2026-02-27 10:47:38.009134687 +0000 UTC m=+1277.971500793" Feb 27 10:47:38 crc kubenswrapper[4728]: I0227 10:47:38.012444 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:38 crc kubenswrapper[4728]: I0227 10:47:38.048855 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=55.068566774 podStartE2EDuration="1m9.04883958s" podCreationTimestamp="2026-02-27 10:46:29 +0000 UTC" firstStartedPulling="2026-02-27 10:46:47.273705268 +0000 UTC m=+1227.236071374" lastFinishedPulling="2026-02-27 10:47:01.253978034 +0000 UTC m=+1241.216344180" observedRunningTime="2026-02-27 10:47:38.036800728 +0000 UTC m=+1277.999166824" watchObservedRunningTime="2026-02-27 10:47:38.04883958 +0000 UTC m=+1278.011205686" Feb 27 10:47:38 crc kubenswrapper[4728]: I0227 10:47:38.155914 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-6a86-account-create-update-5cwvr"] Feb 27 10:47:38 crc kubenswrapper[4728]: W0227 10:47:38.576112 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95695a61_7232_4058_a53f_4452a50cead2.slice/crio-497852ae6d2a2535edcbcd0e3b5ac2c9236b099b32c572b6f3cefc14d02cccf7 WatchSource:0}: Error finding container 497852ae6d2a2535edcbcd0e3b5ac2c9236b099b32c572b6f3cefc14d02cccf7: 
Status 404 returned error can't find the container with id 497852ae6d2a2535edcbcd0e3b5ac2c9236b099b32c572b6f3cefc14d02cccf7 Feb 27 10:47:38 crc kubenswrapper[4728]: I0227 10:47:38.604120 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-m2jc8"] Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.014596 4728 generic.go:334] "Generic (PLEG): container finished" podID="f18f0df8-5474-49a0-b699-b8199f62036e" containerID="faf9aa723ed87358c780fd4235cfc9648bd021e9891cfab804e4c1dd0f651c01" exitCode=0 Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.014702 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xbvtd" event={"ID":"f18f0df8-5474-49a0-b699-b8199f62036e","Type":"ContainerDied","Data":"faf9aa723ed87358c780fd4235cfc9648bd021e9891cfab804e4c1dd0f651c01"} Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.020591 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"26ecfb63-8476-497d-9cb3-3729c4961b4e","Type":"ContainerStarted","Data":"a163ccc7b13763efebb2e9f850417e278c80c19e153b529f3a59ccf7d23a4aec"} Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.020791 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.022835 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m2jc8" event={"ID":"95695a61-7232-4058-a53f-4452a50cead2","Type":"ContainerStarted","Data":"491c13b241208261f9fd3853edb11bcbae0923fb864e625b7c56cb8f7831783a"} Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.022870 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m2jc8" event={"ID":"95695a61-7232-4058-a53f-4452a50cead2","Type":"ContainerStarted","Data":"497852ae6d2a2535edcbcd0e3b5ac2c9236b099b32c572b6f3cefc14d02cccf7"} 
Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.023901 4728 generic.go:334] "Generic (PLEG): container finished" podID="170c93c3-680a-4c65-a684-9fcf38798fb2" containerID="869c05cfc5f32c79cbd4629a72add801bba595dab4720b90437efd90d5bca092" exitCode=0 Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.024957 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-6a86-account-create-update-5cwvr" event={"ID":"170c93c3-680a-4c65-a684-9fcf38798fb2","Type":"ContainerDied","Data":"869c05cfc5f32c79cbd4629a72add801bba595dab4720b90437efd90d5bca092"} Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.024983 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-6a86-account-create-update-5cwvr" event={"ID":"170c93c3-680a-4c65-a684-9fcf38798fb2","Type":"ContainerStarted","Data":"133b669a83a5fc9999e7f35f8c3b17b59f7364767a2b4ce27dc0c4f81414df04"} Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.106255 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=56.098169229 podStartE2EDuration="1m10.106237929s" podCreationTimestamp="2026-02-27 10:46:29 +0000 UTC" firstStartedPulling="2026-02-27 10:46:47.273237366 +0000 UTC m=+1227.235603492" lastFinishedPulling="2026-02-27 10:47:01.281306076 +0000 UTC m=+1241.243672192" observedRunningTime="2026-02-27 10:47:39.103424922 +0000 UTC m=+1279.065791028" watchObservedRunningTime="2026-02-27 10:47:39.106237929 +0000 UTC m=+1279.068604035" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.129798 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-m2jc8" podStartSLOduration=2.129785117 podStartE2EDuration="2.129785117s" podCreationTimestamp="2026-02-27 10:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 
10:47:39.125621443 +0000 UTC m=+1279.087987549" watchObservedRunningTime="2026-02-27 10:47:39.129785117 +0000 UTC m=+1279.092151223" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.387594 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-bd5fc" podUID="20d22b86-c3cb-4b12-8e88-35369d033e1e" containerName="ovn-controller" probeResult="failure" output=< Feb 27 10:47:39 crc kubenswrapper[4728]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 27 10:47:39 crc kubenswrapper[4728]: > Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.396043 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.413114 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bhldn" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.631732 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bd5fc-config-dwf88"] Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.636591 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.641660 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.656861 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bd5fc-config-dwf88"] Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.733424 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/48103299-8805-4df6-b07c-613d65a2edac-additional-scripts\") pod \"ovn-controller-bd5fc-config-dwf88\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.733516 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48103299-8805-4df6-b07c-613d65a2edac-var-run\") pod \"ovn-controller-bd5fc-config-dwf88\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.733553 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48103299-8805-4df6-b07c-613d65a2edac-scripts\") pod \"ovn-controller-bd5fc-config-dwf88\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.733588 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/48103299-8805-4df6-b07c-613d65a2edac-var-run-ovn\") pod \"ovn-controller-bd5fc-config-dwf88\" (UID: 
\"48103299-8805-4df6-b07c-613d65a2edac\") " pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.733607 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/48103299-8805-4df6-b07c-613d65a2edac-var-log-ovn\") pod \"ovn-controller-bd5fc-config-dwf88\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.733658 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7gl9\" (UniqueName: \"kubernetes.io/projected/48103299-8805-4df6-b07c-613d65a2edac-kube-api-access-v7gl9\") pod \"ovn-controller-bd5fc-config-dwf88\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.834931 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7gl9\" (UniqueName: \"kubernetes.io/projected/48103299-8805-4df6-b07c-613d65a2edac-kube-api-access-v7gl9\") pod \"ovn-controller-bd5fc-config-dwf88\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.835099 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/48103299-8805-4df6-b07c-613d65a2edac-additional-scripts\") pod \"ovn-controller-bd5fc-config-dwf88\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.835887 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/48103299-8805-4df6-b07c-613d65a2edac-additional-scripts\") pod \"ovn-controller-bd5fc-config-dwf88\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.836031 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48103299-8805-4df6-b07c-613d65a2edac-var-run\") pod \"ovn-controller-bd5fc-config-dwf88\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.836131 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48103299-8805-4df6-b07c-613d65a2edac-scripts\") pod \"ovn-controller-bd5fc-config-dwf88\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.836213 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/48103299-8805-4df6-b07c-613d65a2edac-var-run-ovn\") pod \"ovn-controller-bd5fc-config-dwf88\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.836233 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/48103299-8805-4df6-b07c-613d65a2edac-var-log-ovn\") pod \"ovn-controller-bd5fc-config-dwf88\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.836812 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/48103299-8805-4df6-b07c-613d65a2edac-var-run\") pod \"ovn-controller-bd5fc-config-dwf88\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.836852 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/48103299-8805-4df6-b07c-613d65a2edac-var-run-ovn\") pod \"ovn-controller-bd5fc-config-dwf88\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.836884 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/48103299-8805-4df6-b07c-613d65a2edac-var-log-ovn\") pod \"ovn-controller-bd5fc-config-dwf88\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.838453 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48103299-8805-4df6-b07c-613d65a2edac-scripts\") pod \"ovn-controller-bd5fc-config-dwf88\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:39 crc kubenswrapper[4728]: I0227 10:47:39.857275 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7gl9\" (UniqueName: \"kubernetes.io/projected/48103299-8805-4df6-b07c-613d65a2edac-kube-api-access-v7gl9\") pod \"ovn-controller-bd5fc-config-dwf88\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.035798 4728 generic.go:334] "Generic (PLEG): container finished" podID="95695a61-7232-4058-a53f-4452a50cead2" 
containerID="491c13b241208261f9fd3853edb11bcbae0923fb864e625b7c56cb8f7831783a" exitCode=0 Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.035857 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m2jc8" event={"ID":"95695a61-7232-4058-a53f-4452a50cead2","Type":"ContainerDied","Data":"491c13b241208261f9fd3853edb11bcbae0923fb864e625b7c56cb8f7831783a"} Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.041679 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec0a9664-7538-43dd-904d-c386d569999e","Type":"ContainerStarted","Data":"7c1fbc646319c3a4aa93e5d36ecc4686b6aa53e55cbce130e389bf7aef05ceba"} Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.073167 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.595436 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-6a86-account-create-update-5cwvr" Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.690448 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170c93c3-680a-4c65-a684-9fcf38798fb2-operator-scripts\") pod \"170c93c3-680a-4c65-a684-9fcf38798fb2\" (UID: \"170c93c3-680a-4c65-a684-9fcf38798fb2\") " Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.690823 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4zh7\" (UniqueName: \"kubernetes.io/projected/170c93c3-680a-4c65-a684-9fcf38798fb2-kube-api-access-t4zh7\") pod \"170c93c3-680a-4c65-a684-9fcf38798fb2\" (UID: \"170c93c3-680a-4c65-a684-9fcf38798fb2\") " Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.691297 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/170c93c3-680a-4c65-a684-9fcf38798fb2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "170c93c3-680a-4c65-a684-9fcf38798fb2" (UID: "170c93c3-680a-4c65-a684-9fcf38798fb2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.691847 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170c93c3-680a-4c65-a684-9fcf38798fb2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.694742 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170c93c3-680a-4c65-a684-9fcf38798fb2-kube-api-access-t4zh7" (OuterVolumeSpecName: "kube-api-access-t4zh7") pod "170c93c3-680a-4c65-a684-9fcf38798fb2" (UID: "170c93c3-680a-4c65-a684-9fcf38798fb2"). InnerVolumeSpecName "kube-api-access-t4zh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.795530 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4zh7\" (UniqueName: \"kubernetes.io/projected/170c93c3-680a-4c65-a684-9fcf38798fb2-kube-api-access-t4zh7\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.802949 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xbvtd" Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.873247 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.873631 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9d90c432-384c-4a43-a2cf-b26c3804a632" containerName="prometheus" containerID="cri-o://956afb4d8c2d48f6fba5208e1b27cbd395b5fdace793befa706d3932e5f70322" gracePeriod=600 Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.873688 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9d90c432-384c-4a43-a2cf-b26c3804a632" containerName="thanos-sidecar" containerID="cri-o://7ac31a4868dcef8ffc330424a4b9b6b6de7975e4b28d6d2ecc5063dcc32400cc" gracePeriod=600 Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.873736 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9d90c432-384c-4a43-a2cf-b26c3804a632" containerName="config-reloader" containerID="cri-o://397869c57417caa95e8810cfd09cda540ce0e2c2325cea42dfea7dd8548be2c3" gracePeriod=600 Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.898435 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f18f0df8-5474-49a0-b699-b8199f62036e-operator-scripts\") pod \"f18f0df8-5474-49a0-b699-b8199f62036e\" (UID: \"f18f0df8-5474-49a0-b699-b8199f62036e\") " Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.898731 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7nvm\" (UniqueName: \"kubernetes.io/projected/f18f0df8-5474-49a0-b699-b8199f62036e-kube-api-access-q7nvm\") pod \"f18f0df8-5474-49a0-b699-b8199f62036e\" (UID: \"f18f0df8-5474-49a0-b699-b8199f62036e\") " Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.900697 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18f0df8-5474-49a0-b699-b8199f62036e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f18f0df8-5474-49a0-b699-b8199f62036e" (UID: "f18f0df8-5474-49a0-b699-b8199f62036e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:40 crc kubenswrapper[4728]: I0227 10:47:40.906154 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f18f0df8-5474-49a0-b699-b8199f62036e-kube-api-access-q7nvm" (OuterVolumeSpecName: "kube-api-access-q7nvm") pod "f18f0df8-5474-49a0-b699-b8199f62036e" (UID: "f18f0df8-5474-49a0-b699-b8199f62036e"). InnerVolumeSpecName "kube-api-access-q7nvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.001215 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f18f0df8-5474-49a0-b699-b8199f62036e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.001246 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7nvm\" (UniqueName: \"kubernetes.io/projected/f18f0df8-5474-49a0-b699-b8199f62036e-kube-api-access-q7nvm\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.019945 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bd5fc-config-dwf88"] Feb 27 10:47:41 crc kubenswrapper[4728]: W0227 10:47:41.036492 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48103299_8805_4df6_b07c_613d65a2edac.slice/crio-acabb38c7eb21a13842e39ae8aec53a196017ab8e1b80519a05a504801279133 WatchSource:0}: Error finding container acabb38c7eb21a13842e39ae8aec53a196017ab8e1b80519a05a504801279133: Status 404 returned error can't find the container with id acabb38c7eb21a13842e39ae8aec53a196017ab8e1b80519a05a504801279133 Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.056334 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec0a9664-7538-43dd-904d-c386d569999e","Type":"ContainerStarted","Data":"c5ed3898aa4dc25f800fc627f7d51f2475527a98d7d5b4ff48d5e73a1972eecb"} Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.057920 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bd5fc-config-dwf88" event={"ID":"48103299-8805-4df6-b07c-613d65a2edac","Type":"ContainerStarted","Data":"acabb38c7eb21a13842e39ae8aec53a196017ab8e1b80519a05a504801279133"} Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 
10:47:41.061241 4728 generic.go:334] "Generic (PLEG): container finished" podID="9d90c432-384c-4a43-a2cf-b26c3804a632" containerID="7ac31a4868dcef8ffc330424a4b9b6b6de7975e4b28d6d2ecc5063dcc32400cc" exitCode=0 Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.061275 4728 generic.go:334] "Generic (PLEG): container finished" podID="9d90c432-384c-4a43-a2cf-b26c3804a632" containerID="956afb4d8c2d48f6fba5208e1b27cbd395b5fdace793befa706d3932e5f70322" exitCode=0 Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.061289 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d90c432-384c-4a43-a2cf-b26c3804a632","Type":"ContainerDied","Data":"7ac31a4868dcef8ffc330424a4b9b6b6de7975e4b28d6d2ecc5063dcc32400cc"} Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.061336 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d90c432-384c-4a43-a2cf-b26c3804a632","Type":"ContainerDied","Data":"956afb4d8c2d48f6fba5208e1b27cbd395b5fdace793befa706d3932e5f70322"} Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.064578 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-6a86-account-create-update-5cwvr" event={"ID":"170c93c3-680a-4c65-a684-9fcf38798fb2","Type":"ContainerDied","Data":"133b669a83a5fc9999e7f35f8c3b17b59f7364767a2b4ce27dc0c4f81414df04"} Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.064616 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="133b669a83a5fc9999e7f35f8c3b17b59f7364767a2b4ce27dc0c4f81414df04" Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.064686 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-6a86-account-create-update-5cwvr" Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.072179 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xbvtd" Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.078630 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-xbvtd" event={"ID":"f18f0df8-5474-49a0-b699-b8199f62036e","Type":"ContainerDied","Data":"4a65eeb578e610dfac302ef1f9c4e085056dcb3dd256b13a5eefe2a38efd4186"} Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.078675 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a65eeb578e610dfac302ef1f9c4e085056dcb3dd256b13a5eefe2a38efd4186" Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.543957 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-m2jc8" Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.716581 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltjlj\" (UniqueName: \"kubernetes.io/projected/95695a61-7232-4058-a53f-4452a50cead2-kube-api-access-ltjlj\") pod \"95695a61-7232-4058-a53f-4452a50cead2\" (UID: \"95695a61-7232-4058-a53f-4452a50cead2\") " Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.716951 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95695a61-7232-4058-a53f-4452a50cead2-operator-scripts\") pod \"95695a61-7232-4058-a53f-4452a50cead2\" (UID: \"95695a61-7232-4058-a53f-4452a50cead2\") " Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.717923 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95695a61-7232-4058-a53f-4452a50cead2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95695a61-7232-4058-a53f-4452a50cead2" (UID: "95695a61-7232-4058-a53f-4452a50cead2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.723699 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95695a61-7232-4058-a53f-4452a50cead2-kube-api-access-ltjlj" (OuterVolumeSpecName: "kube-api-access-ltjlj") pod "95695a61-7232-4058-a53f-4452a50cead2" (UID: "95695a61-7232-4058-a53f-4452a50cead2"). InnerVolumeSpecName "kube-api-access-ltjlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.835048 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95695a61-7232-4058-a53f-4452a50cead2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:41 crc kubenswrapper[4728]: I0227 10:47:41.835081 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltjlj\" (UniqueName: \"kubernetes.io/projected/95695a61-7232-4058-a53f-4452a50cead2-kube-api-access-ltjlj\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.108345 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m2jc8" event={"ID":"95695a61-7232-4058-a53f-4452a50cead2","Type":"ContainerDied","Data":"497852ae6d2a2535edcbcd0e3b5ac2c9236b099b32c572b6f3cefc14d02cccf7"} Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.108623 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="497852ae6d2a2535edcbcd0e3b5ac2c9236b099b32c572b6f3cefc14d02cccf7" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.108674 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-m2jc8" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.123437 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bd5fc-config-dwf88" event={"ID":"48103299-8805-4df6-b07c-613d65a2edac","Type":"ContainerStarted","Data":"3fbb2369f8eb8d4dc2f2cefe59f193afaff562c8f0f8aaee9d7219477ee7249e"} Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.125385 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.138195 4728 generic.go:334] "Generic (PLEG): container finished" podID="9d90c432-384c-4a43-a2cf-b26c3804a632" containerID="397869c57417caa95e8810cfd09cda540ce0e2c2325cea42dfea7dd8548be2c3" exitCode=0 Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.138242 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d90c432-384c-4a43-a2cf-b26c3804a632","Type":"ContainerDied","Data":"397869c57417caa95e8810cfd09cda540ce0e2c2325cea42dfea7dd8548be2c3"} Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.138289 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d90c432-384c-4a43-a2cf-b26c3804a632","Type":"ContainerDied","Data":"230864acd20b1d218307455ac86d10812859e6b9018472e64fe0c97099a0802f"} Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.138308 4728 scope.go:117] "RemoveContainer" containerID="7ac31a4868dcef8ffc330424a4b9b6b6de7975e4b28d6d2ecc5063dcc32400cc" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.148187 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9d90c432-384c-4a43-a2cf-b26c3804a632-prometheus-metric-storage-rulefiles-1\") pod \"9d90c432-384c-4a43-a2cf-b26c3804a632\" (UID: 
\"9d90c432-384c-4a43-a2cf-b26c3804a632\") " Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.148267 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d90c432-384c-4a43-a2cf-b26c3804a632-web-config\") pod \"9d90c432-384c-4a43-a2cf-b26c3804a632\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.148309 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9d90c432-384c-4a43-a2cf-b26c3804a632-prometheus-metric-storage-rulefiles-2\") pod \"9d90c432-384c-4a43-a2cf-b26c3804a632\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.148334 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d90c432-384c-4a43-a2cf-b26c3804a632-config\") pod \"9d90c432-384c-4a43-a2cf-b26c3804a632\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.148366 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d90c432-384c-4a43-a2cf-b26c3804a632-thanos-prometheus-http-client-file\") pod \"9d90c432-384c-4a43-a2cf-b26c3804a632\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.148407 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d90c432-384c-4a43-a2cf-b26c3804a632-tls-assets\") pod \"9d90c432-384c-4a43-a2cf-b26c3804a632\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.148570 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\") pod \"9d90c432-384c-4a43-a2cf-b26c3804a632\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.148628 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d90c432-384c-4a43-a2cf-b26c3804a632-prometheus-metric-storage-rulefiles-0\") pod \"9d90c432-384c-4a43-a2cf-b26c3804a632\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.148646 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbs59\" (UniqueName: \"kubernetes.io/projected/9d90c432-384c-4a43-a2cf-b26c3804a632-kube-api-access-pbs59\") pod \"9d90c432-384c-4a43-a2cf-b26c3804a632\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.148762 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d90c432-384c-4a43-a2cf-b26c3804a632-config-out\") pod \"9d90c432-384c-4a43-a2cf-b26c3804a632\" (UID: \"9d90c432-384c-4a43-a2cf-b26c3804a632\") " Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.149001 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-bd5fc-config-dwf88" podStartSLOduration=3.148981699 podStartE2EDuration="3.148981699s" podCreationTimestamp="2026-02-27 10:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:47:42.141295268 +0000 UTC m=+1282.103661374" watchObservedRunningTime="2026-02-27 10:47:42.148981699 +0000 UTC m=+1282.111347805" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.151575 
4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d90c432-384c-4a43-a2cf-b26c3804a632-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "9d90c432-384c-4a43-a2cf-b26c3804a632" (UID: "9d90c432-384c-4a43-a2cf-b26c3804a632"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.151934 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d90c432-384c-4a43-a2cf-b26c3804a632-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "9d90c432-384c-4a43-a2cf-b26c3804a632" (UID: "9d90c432-384c-4a43-a2cf-b26c3804a632"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.153560 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d90c432-384c-4a43-a2cf-b26c3804a632-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "9d90c432-384c-4a43-a2cf-b26c3804a632" (UID: "9d90c432-384c-4a43-a2cf-b26c3804a632"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.154141 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d90c432-384c-4a43-a2cf-b26c3804a632-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9d90c432-384c-4a43-a2cf-b26c3804a632" (UID: "9d90c432-384c-4a43-a2cf-b26c3804a632"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.154231 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d90c432-384c-4a43-a2cf-b26c3804a632-kube-api-access-pbs59" (OuterVolumeSpecName: "kube-api-access-pbs59") pod "9d90c432-384c-4a43-a2cf-b26c3804a632" (UID: "9d90c432-384c-4a43-a2cf-b26c3804a632"). InnerVolumeSpecName "kube-api-access-pbs59". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.160296 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d90c432-384c-4a43-a2cf-b26c3804a632-config" (OuterVolumeSpecName: "config") pod "9d90c432-384c-4a43-a2cf-b26c3804a632" (UID: "9d90c432-384c-4a43-a2cf-b26c3804a632"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.161438 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d90c432-384c-4a43-a2cf-b26c3804a632-config-out" (OuterVolumeSpecName: "config-out") pod "9d90c432-384c-4a43-a2cf-b26c3804a632" (UID: "9d90c432-384c-4a43-a2cf-b26c3804a632"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.173702 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d90c432-384c-4a43-a2cf-b26c3804a632-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "9d90c432-384c-4a43-a2cf-b26c3804a632" (UID: "9d90c432-384c-4a43-a2cf-b26c3804a632"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.178328 4728 scope.go:117] "RemoveContainer" containerID="397869c57417caa95e8810cfd09cda540ce0e2c2325cea42dfea7dd8548be2c3" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.187153 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec0a9664-7538-43dd-904d-c386d569999e","Type":"ContainerStarted","Data":"3671163f3cc4f27b1f85a06cab70168b30df6f41874e20eefa6854c17d83c7f8"} Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.187968 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec0a9664-7538-43dd-904d-c386d569999e","Type":"ContainerStarted","Data":"736441b4a119126f0c31325d503a2f67701d81f9412a8bd259b07c1adb2bf3d6"} Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.188069 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec0a9664-7538-43dd-904d-c386d569999e","Type":"ContainerStarted","Data":"9105d735fa9bc08ab480c226a4807e9f93bc397a0feb6c2f64ddf851f087bb75"} Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.188185 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ec0a9664-7538-43dd-904d-c386d569999e","Type":"ContainerStarted","Data":"23717a3fc199596130c71a8af1ad1fb2fe8b88a3b5f8f2be75ebf4c3e55b8578"} Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.244744 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6054c06d-bbc1-4903-8542-ab0378034a6a" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "9d90c432-384c-4a43-a2cf-b26c3804a632" (UID: "9d90c432-384c-4a43-a2cf-b26c3804a632"). InnerVolumeSpecName "pvc-6054c06d-bbc1-4903-8542-ab0378034a6a". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.244985 4728 scope.go:117] "RemoveContainer" containerID="956afb4d8c2d48f6fba5208e1b27cbd395b5fdace793befa706d3932e5f70322" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.249979 4728 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d90c432-384c-4a43-a2cf-b26c3804a632-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.250004 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbs59\" (UniqueName: \"kubernetes.io/projected/9d90c432-384c-4a43-a2cf-b26c3804a632-kube-api-access-pbs59\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.250015 4728 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d90c432-384c-4a43-a2cf-b26c3804a632-config-out\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.250024 4728 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9d90c432-384c-4a43-a2cf-b26c3804a632-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.250035 4728 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9d90c432-384c-4a43-a2cf-b26c3804a632-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.250045 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d90c432-384c-4a43-a2cf-b26c3804a632-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:42 crc 
kubenswrapper[4728]: I0227 10:47:42.250055 4728 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d90c432-384c-4a43-a2cf-b26c3804a632-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.250064 4728 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d90c432-384c-4a43-a2cf-b26c3804a632-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.250091 4728 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\") on node \"crc\" " Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.280190 4728 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.280364 4728 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6054c06d-bbc1-4903-8542-ab0378034a6a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6054c06d-bbc1-4903-8542-ab0378034a6a") on node "crc" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.286713 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d90c432-384c-4a43-a2cf-b26c3804a632-web-config" (OuterVolumeSpecName: "web-config") pod "9d90c432-384c-4a43-a2cf-b26c3804a632" (UID: "9d90c432-384c-4a43-a2cf-b26c3804a632"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.291442 4728 scope.go:117] "RemoveContainer" containerID="557b65567de2e726ad23081e70f8b025af2dcf1b9d751df1a88af8882fa6306c" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.325031 4728 scope.go:117] "RemoveContainer" containerID="7ac31a4868dcef8ffc330424a4b9b6b6de7975e4b28d6d2ecc5063dcc32400cc" Feb 27 10:47:42 crc kubenswrapper[4728]: E0227 10:47:42.325483 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ac31a4868dcef8ffc330424a4b9b6b6de7975e4b28d6d2ecc5063dcc32400cc\": container with ID starting with 7ac31a4868dcef8ffc330424a4b9b6b6de7975e4b28d6d2ecc5063dcc32400cc not found: ID does not exist" containerID="7ac31a4868dcef8ffc330424a4b9b6b6de7975e4b28d6d2ecc5063dcc32400cc" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.325653 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ac31a4868dcef8ffc330424a4b9b6b6de7975e4b28d6d2ecc5063dcc32400cc"} err="failed to get container status \"7ac31a4868dcef8ffc330424a4b9b6b6de7975e4b28d6d2ecc5063dcc32400cc\": rpc error: code = NotFound desc = could not find container \"7ac31a4868dcef8ffc330424a4b9b6b6de7975e4b28d6d2ecc5063dcc32400cc\": container with ID starting with 7ac31a4868dcef8ffc330424a4b9b6b6de7975e4b28d6d2ecc5063dcc32400cc not found: ID does not exist" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.325680 4728 scope.go:117] "RemoveContainer" containerID="397869c57417caa95e8810cfd09cda540ce0e2c2325cea42dfea7dd8548be2c3" Feb 27 10:47:42 crc kubenswrapper[4728]: E0227 10:47:42.326022 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"397869c57417caa95e8810cfd09cda540ce0e2c2325cea42dfea7dd8548be2c3\": container with ID starting with 
397869c57417caa95e8810cfd09cda540ce0e2c2325cea42dfea7dd8548be2c3 not found: ID does not exist" containerID="397869c57417caa95e8810cfd09cda540ce0e2c2325cea42dfea7dd8548be2c3" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.326054 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"397869c57417caa95e8810cfd09cda540ce0e2c2325cea42dfea7dd8548be2c3"} err="failed to get container status \"397869c57417caa95e8810cfd09cda540ce0e2c2325cea42dfea7dd8548be2c3\": rpc error: code = NotFound desc = could not find container \"397869c57417caa95e8810cfd09cda540ce0e2c2325cea42dfea7dd8548be2c3\": container with ID starting with 397869c57417caa95e8810cfd09cda540ce0e2c2325cea42dfea7dd8548be2c3 not found: ID does not exist" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.326074 4728 scope.go:117] "RemoveContainer" containerID="956afb4d8c2d48f6fba5208e1b27cbd395b5fdace793befa706d3932e5f70322" Feb 27 10:47:42 crc kubenswrapper[4728]: E0227 10:47:42.326294 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"956afb4d8c2d48f6fba5208e1b27cbd395b5fdace793befa706d3932e5f70322\": container with ID starting with 956afb4d8c2d48f6fba5208e1b27cbd395b5fdace793befa706d3932e5f70322 not found: ID does not exist" containerID="956afb4d8c2d48f6fba5208e1b27cbd395b5fdace793befa706d3932e5f70322" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.326331 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"956afb4d8c2d48f6fba5208e1b27cbd395b5fdace793befa706d3932e5f70322"} err="failed to get container status \"956afb4d8c2d48f6fba5208e1b27cbd395b5fdace793befa706d3932e5f70322\": rpc error: code = NotFound desc = could not find container \"956afb4d8c2d48f6fba5208e1b27cbd395b5fdace793befa706d3932e5f70322\": container with ID starting with 956afb4d8c2d48f6fba5208e1b27cbd395b5fdace793befa706d3932e5f70322 not found: ID does not 
exist" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.326358 4728 scope.go:117] "RemoveContainer" containerID="557b65567de2e726ad23081e70f8b025af2dcf1b9d751df1a88af8882fa6306c" Feb 27 10:47:42 crc kubenswrapper[4728]: E0227 10:47:42.326792 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"557b65567de2e726ad23081e70f8b025af2dcf1b9d751df1a88af8882fa6306c\": container with ID starting with 557b65567de2e726ad23081e70f8b025af2dcf1b9d751df1a88af8882fa6306c not found: ID does not exist" containerID="557b65567de2e726ad23081e70f8b025af2dcf1b9d751df1a88af8882fa6306c" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.326827 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"557b65567de2e726ad23081e70f8b025af2dcf1b9d751df1a88af8882fa6306c"} err="failed to get container status \"557b65567de2e726ad23081e70f8b025af2dcf1b9d751df1a88af8882fa6306c\": rpc error: code = NotFound desc = could not find container \"557b65567de2e726ad23081e70f8b025af2dcf1b9d751df1a88af8882fa6306c\": container with ID starting with 557b65567de2e726ad23081e70f8b025af2dcf1b9d751df1a88af8882fa6306c not found: ID does not exist" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.352172 4728 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d90c432-384c-4a43-a2cf-b26c3804a632-web-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:42 crc kubenswrapper[4728]: I0227 10:47:42.352872 4728 reconciler_common.go:293] "Volume detached for volume \"pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.239585 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"ec0a9664-7538-43dd-904d-c386d569999e","Type":"ContainerStarted","Data":"c249472040ae89aa3a9df17ac8205388cef8c45ba4907b1492fde4966d82c9cf"} Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.242156 4728 generic.go:334] "Generic (PLEG): container finished" podID="48103299-8805-4df6-b07c-613d65a2edac" containerID="3fbb2369f8eb8d4dc2f2cefe59f193afaff562c8f0f8aaee9d7219477ee7249e" exitCode=0 Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.242240 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bd5fc-config-dwf88" event={"ID":"48103299-8805-4df6-b07c-613d65a2edac","Type":"ContainerDied","Data":"3fbb2369f8eb8d4dc2f2cefe59f193afaff562c8f0f8aaee9d7219477ee7249e"} Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.247866 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.286303 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.138098949 podStartE2EDuration="47.286257955s" podCreationTimestamp="2026-02-27 10:46:56 +0000 UTC" firstStartedPulling="2026-02-27 10:47:30.496207062 +0000 UTC m=+1270.458573168" lastFinishedPulling="2026-02-27 10:47:39.644366068 +0000 UTC m=+1279.606732174" observedRunningTime="2026-02-27 10:47:43.277544035 +0000 UTC m=+1283.239910141" watchObservedRunningTime="2026-02-27 10:47:43.286257955 +0000 UTC m=+1283.248624061" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.326590 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.331631 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.373541 4728 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/prometheus-metric-storage-0"] Feb 27 10:47:43 crc kubenswrapper[4728]: E0227 10:47:43.373967 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170c93c3-680a-4c65-a684-9fcf38798fb2" containerName="mariadb-account-create-update" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.373985 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="170c93c3-680a-4c65-a684-9fcf38798fb2" containerName="mariadb-account-create-update" Feb 27 10:47:43 crc kubenswrapper[4728]: E0227 10:47:43.374010 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18f0df8-5474-49a0-b699-b8199f62036e" containerName="mariadb-database-create" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.374016 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18f0df8-5474-49a0-b699-b8199f62036e" containerName="mariadb-database-create" Feb 27 10:47:43 crc kubenswrapper[4728]: E0227 10:47:43.374032 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d90c432-384c-4a43-a2cf-b26c3804a632" containerName="thanos-sidecar" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.374038 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d90c432-384c-4a43-a2cf-b26c3804a632" containerName="thanos-sidecar" Feb 27 10:47:43 crc kubenswrapper[4728]: E0227 10:47:43.374050 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95695a61-7232-4058-a53f-4452a50cead2" containerName="mariadb-account-create-update" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.374055 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="95695a61-7232-4058-a53f-4452a50cead2" containerName="mariadb-account-create-update" Feb 27 10:47:43 crc kubenswrapper[4728]: E0227 10:47:43.374063 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d90c432-384c-4a43-a2cf-b26c3804a632" containerName="prometheus" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.374069 4728 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="9d90c432-384c-4a43-a2cf-b26c3804a632" containerName="prometheus" Feb 27 10:47:43 crc kubenswrapper[4728]: E0227 10:47:43.374088 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d90c432-384c-4a43-a2cf-b26c3804a632" containerName="config-reloader" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.374093 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d90c432-384c-4a43-a2cf-b26c3804a632" containerName="config-reloader" Feb 27 10:47:43 crc kubenswrapper[4728]: E0227 10:47:43.374101 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d90c432-384c-4a43-a2cf-b26c3804a632" containerName="init-config-reloader" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.374107 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d90c432-384c-4a43-a2cf-b26c3804a632" containerName="init-config-reloader" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.374286 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d90c432-384c-4a43-a2cf-b26c3804a632" containerName="config-reloader" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.374302 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="170c93c3-680a-4c65-a684-9fcf38798fb2" containerName="mariadb-account-create-update" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.374317 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="95695a61-7232-4058-a53f-4452a50cead2" containerName="mariadb-account-create-update" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.374329 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18f0df8-5474-49a0-b699-b8199f62036e" containerName="mariadb-database-create" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.374339 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d90c432-384c-4a43-a2cf-b26c3804a632" containerName="thanos-sidecar" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 
10:47:43.374346 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d90c432-384c-4a43-a2cf-b26c3804a632" containerName="prometheus" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.378344 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.382247 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.382284 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-bzqqf" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.382553 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.382936 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.383131 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.383448 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.383686 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.384269 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.388823 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 27 
10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.389548 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.479222 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d232c99d-32bc-45e3-bc7b-c9dca99571ac-config\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.479270 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d232c99d-32bc-45e3-bc7b-c9dca99571ac-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.479320 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d232c99d-32bc-45e3-bc7b-c9dca99571ac-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.479343 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d232c99d-32bc-45e3-bc7b-c9dca99571ac-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.479371 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d232c99d-32bc-45e3-bc7b-c9dca99571ac-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.479392 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kq7l\" (UniqueName: \"kubernetes.io/projected/d232c99d-32bc-45e3-bc7b-c9dca99571ac-kube-api-access-9kq7l\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.479422 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d232c99d-32bc-45e3-bc7b-c9dca99571ac-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.479447 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d232c99d-32bc-45e3-bc7b-c9dca99571ac-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.479466 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.479488 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d232c99d-32bc-45e3-bc7b-c9dca99571ac-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.479521 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d232c99d-32bc-45e3-bc7b-c9dca99571ac-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.479537 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d232c99d-32bc-45e3-bc7b-c9dca99571ac-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.479577 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d232c99d-32bc-45e3-bc7b-c9dca99571ac-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.581767 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d232c99d-32bc-45e3-bc7b-c9dca99571ac-config\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.582539 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d232c99d-32bc-45e3-bc7b-c9dca99571ac-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.582714 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d232c99d-32bc-45e3-bc7b-c9dca99571ac-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.582796 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d232c99d-32bc-45e3-bc7b-c9dca99571ac-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.582892 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d232c99d-32bc-45e3-bc7b-c9dca99571ac-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 
10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.583332 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kq7l\" (UniqueName: \"kubernetes.io/projected/d232c99d-32bc-45e3-bc7b-c9dca99571ac-kube-api-access-9kq7l\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.583450 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d232c99d-32bc-45e3-bc7b-c9dca99571ac-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.583568 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d232c99d-32bc-45e3-bc7b-c9dca99571ac-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.584117 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d232c99d-32bc-45e3-bc7b-c9dca99571ac-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.584387 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/d232c99d-32bc-45e3-bc7b-c9dca99571ac-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.584484 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.584659 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d232c99d-32bc-45e3-bc7b-c9dca99571ac-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.584789 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d232c99d-32bc-45e3-bc7b-c9dca99571ac-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.585350 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d232c99d-32bc-45e3-bc7b-c9dca99571ac-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.585317 4728 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d232c99d-32bc-45e3-bc7b-c9dca99571ac-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.585837 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d232c99d-32bc-45e3-bc7b-c9dca99571ac-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.588310 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d232c99d-32bc-45e3-bc7b-c9dca99571ac-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.588335 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d232c99d-32bc-45e3-bc7b-c9dca99571ac-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.588755 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d232c99d-32bc-45e3-bc7b-c9dca99571ac-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.588917 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d232c99d-32bc-45e3-bc7b-c9dca99571ac-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.589278 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d232c99d-32bc-45e3-bc7b-c9dca99571ac-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.604219 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kq7l\" (UniqueName: \"kubernetes.io/projected/d232c99d-32bc-45e3-bc7b-c9dca99571ac-kube-api-access-9kq7l\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.604351 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.604391 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/da9c5ded63a0483190dd926a1aaaeae7e9f2dd385d7e3f523dfb80808a161f9d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.605541 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d232c99d-32bc-45e3-bc7b-c9dca99571ac-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.623603 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d232c99d-32bc-45e3-bc7b-c9dca99571ac-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.623827 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d232c99d-32bc-45e3-bc7b-c9dca99571ac-config\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.629472 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-sm4dz"] Feb 
27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.636627 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.641128 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.643459 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-sm4dz"] Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.698623 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6054c06d-bbc1-4903-8542-ab0378034a6a\") pod \"prometheus-metric-storage-0\" (UID: \"d232c99d-32bc-45e3-bc7b-c9dca99571ac\") " pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.713990 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.790652 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-sm4dz\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.790748 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-sm4dz\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.790821 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-sm4dz\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.790845 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-sm4dz\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.790867 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5g8r\" (UniqueName: \"kubernetes.io/projected/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-kube-api-access-k5g8r\") pod 
\"dnsmasq-dns-6d5b6d6b67-sm4dz\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.790957 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-config\") pod \"dnsmasq-dns-6d5b6d6b67-sm4dz\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.894620 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-sm4dz\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.894917 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-sm4dz\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.894977 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-sm4dz\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.895000 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-sm4dz\" (UID: 
\"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.895024 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5g8r\" (UniqueName: \"kubernetes.io/projected/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-kube-api-access-k5g8r\") pod \"dnsmasq-dns-6d5b6d6b67-sm4dz\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.895096 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-config\") pod \"dnsmasq-dns-6d5b6d6b67-sm4dz\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.895929 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-config\") pod \"dnsmasq-dns-6d5b6d6b67-sm4dz\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.896114 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-sm4dz\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.896436 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-sm4dz\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " 
pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.896639 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-sm4dz\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.897175 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-sm4dz\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:43 crc kubenswrapper[4728]: I0227 10:47:43.946702 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5g8r\" (UniqueName: \"kubernetes.io/projected/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-kube-api-access-k5g8r\") pod \"dnsmasq-dns-6d5b6d6b67-sm4dz\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.042289 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.253577 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.411621 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-bd5fc" Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.647493 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-sm4dz"] Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.788311 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d90c432-384c-4a43-a2cf-b26c3804a632" path="/var/lib/kubelet/pods/9d90c432-384c-4a43-a2cf-b26c3804a632/volumes" Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.844319 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.923343 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/48103299-8805-4df6-b07c-613d65a2edac-additional-scripts\") pod \"48103299-8805-4df6-b07c-613d65a2edac\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.923450 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/48103299-8805-4df6-b07c-613d65a2edac-var-log-ovn\") pod \"48103299-8805-4df6-b07c-613d65a2edac\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.923532 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48103299-8805-4df6-b07c-613d65a2edac-var-run\") pod 
\"48103299-8805-4df6-b07c-613d65a2edac\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.923593 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/48103299-8805-4df6-b07c-613d65a2edac-var-run-ovn\") pod \"48103299-8805-4df6-b07c-613d65a2edac\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.923623 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7gl9\" (UniqueName: \"kubernetes.io/projected/48103299-8805-4df6-b07c-613d65a2edac-kube-api-access-v7gl9\") pod \"48103299-8805-4df6-b07c-613d65a2edac\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.923674 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48103299-8805-4df6-b07c-613d65a2edac-scripts\") pod \"48103299-8805-4df6-b07c-613d65a2edac\" (UID: \"48103299-8805-4df6-b07c-613d65a2edac\") " Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.923803 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48103299-8805-4df6-b07c-613d65a2edac-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "48103299-8805-4df6-b07c-613d65a2edac" (UID: "48103299-8805-4df6-b07c-613d65a2edac"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.923814 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48103299-8805-4df6-b07c-613d65a2edac-var-run" (OuterVolumeSpecName: "var-run") pod "48103299-8805-4df6-b07c-613d65a2edac" (UID: "48103299-8805-4df6-b07c-613d65a2edac"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.923855 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48103299-8805-4df6-b07c-613d65a2edac-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "48103299-8805-4df6-b07c-613d65a2edac" (UID: "48103299-8805-4df6-b07c-613d65a2edac"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.924571 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48103299-8805-4df6-b07c-613d65a2edac-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "48103299-8805-4df6-b07c-613d65a2edac" (UID: "48103299-8805-4df6-b07c-613d65a2edac"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.925128 4728 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/48103299-8805-4df6-b07c-613d65a2edac-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.925147 4728 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/48103299-8805-4df6-b07c-613d65a2edac-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.925159 4728 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48103299-8805-4df6-b07c-613d65a2edac-var-run\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.925172 4728 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/48103299-8805-4df6-b07c-613d65a2edac-var-run-ovn\") on node \"crc\" DevicePath \"\"" 
Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.925963 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48103299-8805-4df6-b07c-613d65a2edac-scripts" (OuterVolumeSpecName: "scripts") pod "48103299-8805-4df6-b07c-613d65a2edac" (UID: "48103299-8805-4df6-b07c-613d65a2edac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:44 crc kubenswrapper[4728]: I0227 10:47:44.929092 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48103299-8805-4df6-b07c-613d65a2edac-kube-api-access-v7gl9" (OuterVolumeSpecName: "kube-api-access-v7gl9") pod "48103299-8805-4df6-b07c-613d65a2edac" (UID: "48103299-8805-4df6-b07c-613d65a2edac"). InnerVolumeSpecName "kube-api-access-v7gl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.027214 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7gl9\" (UniqueName: \"kubernetes.io/projected/48103299-8805-4df6-b07c-613d65a2edac-kube-api-access-v7gl9\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.027246 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48103299-8805-4df6-b07c-613d65a2edac-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.268802 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-bd5fc-config-dwf88"] Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.295251 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-bd5fc-config-dwf88"] Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.300615 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bd5fc-config-dwf88" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.300661 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acabb38c7eb21a13842e39ae8aec53a196017ab8e1b80519a05a504801279133" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.310711 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d232c99d-32bc-45e3-bc7b-c9dca99571ac","Type":"ContainerStarted","Data":"42343c451a1a51bcd9756f36094d57bff1034a14abfc12d8b2515ee5b5bc0462"} Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.325742 4728 generic.go:334] "Generic (PLEG): container finished" podID="6bf98b91-deff-4ed7-b4e9-ec72db4d9d92" containerID="45e022385d294c164127ce63fff1552108326576682438f4f40428626fcc4346" exitCode=0 Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.325789 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" event={"ID":"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92","Type":"ContainerDied","Data":"45e022385d294c164127ce63fff1552108326576682438f4f40428626fcc4346"} Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.325818 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" event={"ID":"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92","Type":"ContainerStarted","Data":"34908154719a607500d550ac79b69ee4e6c563cdc5dcf3e44e143cb43c6f6ae1"} Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.397528 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bd5fc-config-lnpq6"] Feb 27 10:47:45 crc kubenswrapper[4728]: E0227 10:47:45.398097 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48103299-8805-4df6-b07c-613d65a2edac" containerName="ovn-config" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.398120 4728 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="48103299-8805-4df6-b07c-613d65a2edac" containerName="ovn-config" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.398375 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="48103299-8805-4df6-b07c-613d65a2edac" containerName="ovn-config" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.399019 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.404834 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.424230 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bd5fc-config-lnpq6"] Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.435930 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrssl\" (UniqueName: \"kubernetes.io/projected/f2b5bd38-8883-4de7-b4a5-8988fd083e21-kube-api-access-mrssl\") pod \"ovn-controller-bd5fc-config-lnpq6\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.436220 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f2b5bd38-8883-4de7-b4a5-8988fd083e21-var-run\") pod \"ovn-controller-bd5fc-config-lnpq6\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.436373 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f2b5bd38-8883-4de7-b4a5-8988fd083e21-var-log-ovn\") pod \"ovn-controller-bd5fc-config-lnpq6\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") 
" pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.436447 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f2b5bd38-8883-4de7-b4a5-8988fd083e21-additional-scripts\") pod \"ovn-controller-bd5fc-config-lnpq6\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.436475 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2b5bd38-8883-4de7-b4a5-8988fd083e21-scripts\") pod \"ovn-controller-bd5fc-config-lnpq6\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.436540 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f2b5bd38-8883-4de7-b4a5-8988fd083e21-var-run-ovn\") pod \"ovn-controller-bd5fc-config-lnpq6\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.538419 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f2b5bd38-8883-4de7-b4a5-8988fd083e21-var-run\") pod \"ovn-controller-bd5fc-config-lnpq6\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.538533 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f2b5bd38-8883-4de7-b4a5-8988fd083e21-var-log-ovn\") pod \"ovn-controller-bd5fc-config-lnpq6\" (UID: 
\"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.538831 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f2b5bd38-8883-4de7-b4a5-8988fd083e21-var-log-ovn\") pod \"ovn-controller-bd5fc-config-lnpq6\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.538852 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f2b5bd38-8883-4de7-b4a5-8988fd083e21-var-run\") pod \"ovn-controller-bd5fc-config-lnpq6\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.538569 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f2b5bd38-8883-4de7-b4a5-8988fd083e21-additional-scripts\") pod \"ovn-controller-bd5fc-config-lnpq6\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.539113 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2b5bd38-8883-4de7-b4a5-8988fd083e21-scripts\") pod \"ovn-controller-bd5fc-config-lnpq6\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.539142 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f2b5bd38-8883-4de7-b4a5-8988fd083e21-var-run-ovn\") pod \"ovn-controller-bd5fc-config-lnpq6\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " 
pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.539678 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f2b5bd38-8883-4de7-b4a5-8988fd083e21-additional-scripts\") pod \"ovn-controller-bd5fc-config-lnpq6\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.541397 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2b5bd38-8883-4de7-b4a5-8988fd083e21-scripts\") pod \"ovn-controller-bd5fc-config-lnpq6\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.541517 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f2b5bd38-8883-4de7-b4a5-8988fd083e21-var-run-ovn\") pod \"ovn-controller-bd5fc-config-lnpq6\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.541623 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrssl\" (UniqueName: \"kubernetes.io/projected/f2b5bd38-8883-4de7-b4a5-8988fd083e21-kube-api-access-mrssl\") pod \"ovn-controller-bd5fc-config-lnpq6\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.563557 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrssl\" (UniqueName: \"kubernetes.io/projected/f2b5bd38-8883-4de7-b4a5-8988fd083e21-kube-api-access-mrssl\") pod \"ovn-controller-bd5fc-config-lnpq6\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " 
pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:45 crc kubenswrapper[4728]: I0227 10:47:45.765955 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:46 crc kubenswrapper[4728]: I0227 10:47:46.291717 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bd5fc-config-lnpq6"] Feb 27 10:47:46 crc kubenswrapper[4728]: W0227 10:47:46.306778 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2b5bd38_8883_4de7_b4a5_8988fd083e21.slice/crio-6745252f397b0b74e8893644257066eba46494d4aced33f4a772eaa7c0addb60 WatchSource:0}: Error finding container 6745252f397b0b74e8893644257066eba46494d4aced33f4a772eaa7c0addb60: Status 404 returned error can't find the container with id 6745252f397b0b74e8893644257066eba46494d4aced33f4a772eaa7c0addb60 Feb 27 10:47:46 crc kubenswrapper[4728]: I0227 10:47:46.338719 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bd5fc-config-lnpq6" event={"ID":"f2b5bd38-8883-4de7-b4a5-8988fd083e21","Type":"ContainerStarted","Data":"6745252f397b0b74e8893644257066eba46494d4aced33f4a772eaa7c0addb60"} Feb 27 10:47:46 crc kubenswrapper[4728]: I0227 10:47:46.342478 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" event={"ID":"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92","Type":"ContainerStarted","Data":"02ed52082e50a91686c703b06fb559e5c0dee80506d11e273846680ad2c0a233"} Feb 27 10:47:46 crc kubenswrapper[4728]: I0227 10:47:46.342639 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:46 crc kubenswrapper[4728]: I0227 10:47:46.372149 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" podStartSLOduration=3.37212596 
podStartE2EDuration="3.37212596s" podCreationTimestamp="2026-02-27 10:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:47:46.366787953 +0000 UTC m=+1286.329154069" watchObservedRunningTime="2026-02-27 10:47:46.37212596 +0000 UTC m=+1286.334492066" Feb 27 10:47:46 crc kubenswrapper[4728]: I0227 10:47:46.739902 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48103299-8805-4df6-b07c-613d65a2edac" path="/var/lib/kubelet/pods/48103299-8805-4df6-b07c-613d65a2edac/volumes" Feb 27 10:47:46 crc kubenswrapper[4728]: I0227 10:47:46.824402 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 27 10:47:46 crc kubenswrapper[4728]: I0227 10:47:46.826370 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 27 10:47:46 crc kubenswrapper[4728]: I0227 10:47:46.828330 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 27 10:47:46 crc kubenswrapper[4728]: I0227 10:47:46.840404 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 27 10:47:46 crc kubenswrapper[4728]: I0227 10:47:46.871998 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60433146-3d7a-433d-a3c3-3152b7591e49-config-data\") pod \"mysqld-exporter-0\" (UID: \"60433146-3d7a-433d-a3c3-3152b7591e49\") " pod="openstack/mysqld-exporter-0" Feb 27 10:47:46 crc kubenswrapper[4728]: I0227 10:47:46.872223 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60433146-3d7a-433d-a3c3-3152b7591e49-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"60433146-3d7a-433d-a3c3-3152b7591e49\") " 
pod="openstack/mysqld-exporter-0" Feb 27 10:47:46 crc kubenswrapper[4728]: I0227 10:47:46.872290 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptsbx\" (UniqueName: \"kubernetes.io/projected/60433146-3d7a-433d-a3c3-3152b7591e49-kube-api-access-ptsbx\") pod \"mysqld-exporter-0\" (UID: \"60433146-3d7a-433d-a3c3-3152b7591e49\") " pod="openstack/mysqld-exporter-0" Feb 27 10:47:46 crc kubenswrapper[4728]: I0227 10:47:46.976196 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60433146-3d7a-433d-a3c3-3152b7591e49-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"60433146-3d7a-433d-a3c3-3152b7591e49\") " pod="openstack/mysqld-exporter-0" Feb 27 10:47:46 crc kubenswrapper[4728]: I0227 10:47:46.976294 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptsbx\" (UniqueName: \"kubernetes.io/projected/60433146-3d7a-433d-a3c3-3152b7591e49-kube-api-access-ptsbx\") pod \"mysqld-exporter-0\" (UID: \"60433146-3d7a-433d-a3c3-3152b7591e49\") " pod="openstack/mysqld-exporter-0" Feb 27 10:47:46 crc kubenswrapper[4728]: I0227 10:47:46.976364 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60433146-3d7a-433d-a3c3-3152b7591e49-config-data\") pod \"mysqld-exporter-0\" (UID: \"60433146-3d7a-433d-a3c3-3152b7591e49\") " pod="openstack/mysqld-exporter-0" Feb 27 10:47:47 crc kubenswrapper[4728]: I0227 10:47:47.086424 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60433146-3d7a-433d-a3c3-3152b7591e49-config-data\") pod \"mysqld-exporter-0\" (UID: \"60433146-3d7a-433d-a3c3-3152b7591e49\") " pod="openstack/mysqld-exporter-0" Feb 27 10:47:47 crc kubenswrapper[4728]: I0227 10:47:47.087800 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ptsbx\" (UniqueName: \"kubernetes.io/projected/60433146-3d7a-433d-a3c3-3152b7591e49-kube-api-access-ptsbx\") pod \"mysqld-exporter-0\" (UID: \"60433146-3d7a-433d-a3c3-3152b7591e49\") " pod="openstack/mysqld-exporter-0" Feb 27 10:47:47 crc kubenswrapper[4728]: I0227 10:47:47.087816 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60433146-3d7a-433d-a3c3-3152b7591e49-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"60433146-3d7a-433d-a3c3-3152b7591e49\") " pod="openstack/mysqld-exporter-0" Feb 27 10:47:47 crc kubenswrapper[4728]: I0227 10:47:47.147428 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 27 10:47:47 crc kubenswrapper[4728]: I0227 10:47:47.353682 4728 generic.go:334] "Generic (PLEG): container finished" podID="f2b5bd38-8883-4de7-b4a5-8988fd083e21" containerID="9673b4727a86843edf95617bbaae3ebd2e504d020d4c212960c6cba20335ef3a" exitCode=0 Feb 27 10:47:47 crc kubenswrapper[4728]: I0227 10:47:47.353761 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bd5fc-config-lnpq6" event={"ID":"f2b5bd38-8883-4de7-b4a5-8988fd083e21","Type":"ContainerDied","Data":"9673b4727a86843edf95617bbaae3ebd2e504d020d4c212960c6cba20335ef3a"} Feb 27 10:47:48 crc kubenswrapper[4728]: I0227 10:47:48.370075 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d232c99d-32bc-45e3-bc7b-c9dca99571ac","Type":"ContainerStarted","Data":"ccc2e8c6af34122f89130e2a0bf68c63243d0ce2ccf9d44b350eb27a9f4bfd26"} Feb 27 10:47:51 crc kubenswrapper[4728]: I0227 10:47:51.011741 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:47:51 crc kubenswrapper[4728]: I0227 10:47:51.340669 4728 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/rabbitmq-server-0" podUID="5948716b-2c2b-4a90-b4b5-f8daad17f020" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Feb 27 10:47:51 crc kubenswrapper[4728]: I0227 10:47:51.692886 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="ad00da50-2e05-4612-a862-5cccd698e77b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: connect: connection refused" Feb 27 10:47:51 crc kubenswrapper[4728]: I0227 10:47:51.709318 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="d96ab6cd-ed9d-4924-9566-91930411701d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.137:5671: connect: connection refused" Feb 27 10:47:54 crc kubenswrapper[4728]: I0227 10:47:54.045888 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:47:54 crc kubenswrapper[4728]: I0227 10:47:54.125335 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cp22r"] Feb 27 10:47:54 crc kubenswrapper[4728]: I0227 10:47:54.125588 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" podUID="259eb6ad-1caa-4bb6-a1d5-2b81ba757e27" containerName="dnsmasq-dns" containerID="cri-o://e0b03c27257e3fcc204efc5377d7300bb0d3af0e465c13afadc6c96431efa460" gracePeriod=10 Feb 27 10:47:54 crc kubenswrapper[4728]: I0227 10:47:54.509323 4728 generic.go:334] "Generic (PLEG): container finished" podID="259eb6ad-1caa-4bb6-a1d5-2b81ba757e27" containerID="e0b03c27257e3fcc204efc5377d7300bb0d3af0e465c13afadc6c96431efa460" exitCode=0 Feb 27 10:47:54 crc kubenswrapper[4728]: I0227 10:47:54.509375 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" 
event={"ID":"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27","Type":"ContainerDied","Data":"e0b03c27257e3fcc204efc5377d7300bb0d3af0e465c13afadc6c96431efa460"} Feb 27 10:47:54 crc kubenswrapper[4728]: E0227 10:47:54.546206 4728 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod259eb6ad_1caa_4bb6_a1d5_2b81ba757e27.slice/crio-e0b03c27257e3fcc204efc5377d7300bb0d3af0e465c13afadc6c96431efa460.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod259eb6ad_1caa_4bb6_a1d5_2b81ba757e27.slice/crio-conmon-e0b03c27257e3fcc204efc5377d7300bb0d3af0e465c13afadc6c96431efa460.scope\": RecentStats: unable to find data in memory cache]" Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.527599 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bd5fc-config-lnpq6" event={"ID":"f2b5bd38-8883-4de7-b4a5-8988fd083e21","Type":"ContainerDied","Data":"6745252f397b0b74e8893644257066eba46494d4aced33f4a772eaa7c0addb60"} Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.527945 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6745252f397b0b74e8893644257066eba46494d4aced33f4a772eaa7c0addb60" Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.725840 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.902636 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2b5bd38-8883-4de7-b4a5-8988fd083e21-scripts\") pod \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.902998 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f2b5bd38-8883-4de7-b4a5-8988fd083e21-var-log-ovn\") pod \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.903054 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f2b5bd38-8883-4de7-b4a5-8988fd083e21-var-run\") pod \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.903076 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f2b5bd38-8883-4de7-b4a5-8988fd083e21-var-run-ovn\") pod \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.903113 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrssl\" (UniqueName: \"kubernetes.io/projected/f2b5bd38-8883-4de7-b4a5-8988fd083e21-kube-api-access-mrssl\") pod \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.903163 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" 
(UniqueName: \"kubernetes.io/configmap/f2b5bd38-8883-4de7-b4a5-8988fd083e21-additional-scripts\") pod \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\" (UID: \"f2b5bd38-8883-4de7-b4a5-8988fd083e21\") " Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.903488 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2b5bd38-8883-4de7-b4a5-8988fd083e21-var-run" (OuterVolumeSpecName: "var-run") pod "f2b5bd38-8883-4de7-b4a5-8988fd083e21" (UID: "f2b5bd38-8883-4de7-b4a5-8988fd083e21"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.903556 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2b5bd38-8883-4de7-b4a5-8988fd083e21-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f2b5bd38-8883-4de7-b4a5-8988fd083e21" (UID: "f2b5bd38-8883-4de7-b4a5-8988fd083e21"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.903580 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2b5bd38-8883-4de7-b4a5-8988fd083e21-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f2b5bd38-8883-4de7-b4a5-8988fd083e21" (UID: "f2b5bd38-8883-4de7-b4a5-8988fd083e21"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.903822 4728 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f2b5bd38-8883-4de7-b4a5-8988fd083e21-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.903836 4728 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f2b5bd38-8883-4de7-b4a5-8988fd083e21-var-run\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.903845 4728 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f2b5bd38-8883-4de7-b4a5-8988fd083e21-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.904371 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b5bd38-8883-4de7-b4a5-8988fd083e21-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f2b5bd38-8883-4de7-b4a5-8988fd083e21" (UID: "f2b5bd38-8883-4de7-b4a5-8988fd083e21"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.904703 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b5bd38-8883-4de7-b4a5-8988fd083e21-scripts" (OuterVolumeSpecName: "scripts") pod "f2b5bd38-8883-4de7-b4a5-8988fd083e21" (UID: "f2b5bd38-8883-4de7-b4a5-8988fd083e21"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.906450 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:55 crc kubenswrapper[4728]: I0227 10:47:55.909444 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b5bd38-8883-4de7-b4a5-8988fd083e21-kube-api-access-mrssl" (OuterVolumeSpecName: "kube-api-access-mrssl") pod "f2b5bd38-8883-4de7-b4a5-8988fd083e21" (UID: "f2b5bd38-8883-4de7-b4a5-8988fd083e21"). InnerVolumeSpecName "kube-api-access-mrssl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.006208 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrssl\" (UniqueName: \"kubernetes.io/projected/f2b5bd38-8883-4de7-b4a5-8988fd083e21-kube-api-access-mrssl\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.006239 4728 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f2b5bd38-8883-4de7-b4a5-8988fd083e21-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.006250 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2b5bd38-8883-4de7-b4a5-8988fd083e21-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.107826 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcpj7\" (UniqueName: \"kubernetes.io/projected/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-kube-api-access-wcpj7\") pod \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.107965 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-ovsdbserver-sb\") pod 
\"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.108062 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-config\") pod \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.108142 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-ovsdbserver-nb\") pod \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.108238 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-dns-svc\") pod \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\" (UID: \"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27\") " Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.109767 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.111794 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-kube-api-access-wcpj7" (OuterVolumeSpecName: "kube-api-access-wcpj7") pod "259eb6ad-1caa-4bb6-a1d5-2b81ba757e27" (UID: "259eb6ad-1caa-4bb6-a1d5-2b81ba757e27"). InnerVolumeSpecName "kube-api-access-wcpj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.157689 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "259eb6ad-1caa-4bb6-a1d5-2b81ba757e27" (UID: "259eb6ad-1caa-4bb6-a1d5-2b81ba757e27"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.158597 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-config" (OuterVolumeSpecName: "config") pod "259eb6ad-1caa-4bb6-a1d5-2b81ba757e27" (UID: "259eb6ad-1caa-4bb6-a1d5-2b81ba757e27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.163487 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "259eb6ad-1caa-4bb6-a1d5-2b81ba757e27" (UID: "259eb6ad-1caa-4bb6-a1d5-2b81ba757e27"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.164729 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "259eb6ad-1caa-4bb6-a1d5-2b81ba757e27" (UID: "259eb6ad-1caa-4bb6-a1d5-2b81ba757e27"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.210410 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcpj7\" (UniqueName: \"kubernetes.io/projected/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-kube-api-access-wcpj7\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.210670 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.210762 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.210830 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.210899 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.547671 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-98848" event={"ID":"cc607cb0-0557-4198-8bae-07f9a55cf4a5","Type":"ContainerStarted","Data":"4da79aa54374319530ddbe894d643f5b81add54efea317a80f7bb617f94ba060"} Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.549784 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"60433146-3d7a-433d-a3c3-3152b7591e49","Type":"ContainerStarted","Data":"506e4d97e0054efa81f65ca8df0b276044dedc3386fd4ecb65498ad2bd659bff"} Feb 27 
10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.553065 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bd5fc-config-lnpq6" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.553102 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.553073 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cp22r" event={"ID":"259eb6ad-1caa-4bb6-a1d5-2b81ba757e27","Type":"ContainerDied","Data":"2b151b66d4ac6e85c04a7c1d444cd39284a3f2bd10db382d59bf7f0a4b0d1e99"} Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.553198 4728 scope.go:117] "RemoveContainer" containerID="e0b03c27257e3fcc204efc5377d7300bb0d3af0e465c13afadc6c96431efa460" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.581792 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-98848" podStartSLOduration=3.756883761 podStartE2EDuration="22.581768259s" podCreationTimestamp="2026-02-27 10:47:34 +0000 UTC" firstStartedPulling="2026-02-27 10:47:36.787122701 +0000 UTC m=+1276.749488807" lastFinishedPulling="2026-02-27 10:47:55.612007199 +0000 UTC m=+1295.574373305" observedRunningTime="2026-02-27 10:47:56.570977562 +0000 UTC m=+1296.533343708" watchObservedRunningTime="2026-02-27 10:47:56.581768259 +0000 UTC m=+1296.544134375" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.600497 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cp22r"] Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.601114 4728 scope.go:117] "RemoveContainer" containerID="2905360e69e8ca10873c633e7134b4ef77dc9b252607c9695e9944df54067b73" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.626321 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cp22r"] Feb 27 
10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.745033 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="259eb6ad-1caa-4bb6-a1d5-2b81ba757e27" path="/var/lib/kubelet/pods/259eb6ad-1caa-4bb6-a1d5-2b81ba757e27/volumes" Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.811967 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-bd5fc-config-lnpq6"] Feb 27 10:47:56 crc kubenswrapper[4728]: I0227 10:47:56.823599 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-bd5fc-config-lnpq6"] Feb 27 10:47:57 crc kubenswrapper[4728]: I0227 10:47:57.609962 4728 generic.go:334] "Generic (PLEG): container finished" podID="d232c99d-32bc-45e3-bc7b-c9dca99571ac" containerID="ccc2e8c6af34122f89130e2a0bf68c63243d0ce2ccf9d44b350eb27a9f4bfd26" exitCode=0 Feb 27 10:47:57 crc kubenswrapper[4728]: I0227 10:47:57.610198 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d232c99d-32bc-45e3-bc7b-c9dca99571ac","Type":"ContainerDied","Data":"ccc2e8c6af34122f89130e2a0bf68c63243d0ce2ccf9d44b350eb27a9f4bfd26"} Feb 27 10:47:58 crc kubenswrapper[4728]: I0227 10:47:58.623470 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"60433146-3d7a-433d-a3c3-3152b7591e49","Type":"ContainerStarted","Data":"a340d4a1674ed3d59c7300b26bdc066d546b7c44c64c94ca95931667f740f667"} Feb 27 10:47:58 crc kubenswrapper[4728]: I0227 10:47:58.626020 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d232c99d-32bc-45e3-bc7b-c9dca99571ac","Type":"ContainerStarted","Data":"e09e4f1ad9f3515b484ce41b57cea27a3223065a6a8b73d0db7f7ea59460ac4f"} Feb 27 10:47:58 crc kubenswrapper[4728]: I0227 10:47:58.643246 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=11.255689112 
podStartE2EDuration="12.643219721s" podCreationTimestamp="2026-02-27 10:47:46 +0000 UTC" firstStartedPulling="2026-02-27 10:47:56.11491273 +0000 UTC m=+1296.077278836" lastFinishedPulling="2026-02-27 10:47:57.502443339 +0000 UTC m=+1297.464809445" observedRunningTime="2026-02-27 10:47:58.637979408 +0000 UTC m=+1298.600345534" watchObservedRunningTime="2026-02-27 10:47:58.643219721 +0000 UTC m=+1298.605585837" Feb 27 10:47:58 crc kubenswrapper[4728]: I0227 10:47:58.742103 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b5bd38-8883-4de7-b4a5-8988fd083e21" path="/var/lib/kubelet/pods/f2b5bd38-8883-4de7-b4a5-8988fd083e21/volumes" Feb 27 10:48:00 crc kubenswrapper[4728]: I0227 10:48:00.156440 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536488-bw8pl"] Feb 27 10:48:00 crc kubenswrapper[4728]: E0227 10:48:00.157444 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="259eb6ad-1caa-4bb6-a1d5-2b81ba757e27" containerName="init" Feb 27 10:48:00 crc kubenswrapper[4728]: I0227 10:48:00.157462 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="259eb6ad-1caa-4bb6-a1d5-2b81ba757e27" containerName="init" Feb 27 10:48:00 crc kubenswrapper[4728]: E0227 10:48:00.157565 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="259eb6ad-1caa-4bb6-a1d5-2b81ba757e27" containerName="dnsmasq-dns" Feb 27 10:48:00 crc kubenswrapper[4728]: I0227 10:48:00.157576 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="259eb6ad-1caa-4bb6-a1d5-2b81ba757e27" containerName="dnsmasq-dns" Feb 27 10:48:00 crc kubenswrapper[4728]: E0227 10:48:00.157602 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b5bd38-8883-4de7-b4a5-8988fd083e21" containerName="ovn-config" Feb 27 10:48:00 crc kubenswrapper[4728]: I0227 10:48:00.157610 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b5bd38-8883-4de7-b4a5-8988fd083e21" containerName="ovn-config" Feb 27 
10:48:00 crc kubenswrapper[4728]: I0227 10:48:00.158097 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="259eb6ad-1caa-4bb6-a1d5-2b81ba757e27" containerName="dnsmasq-dns" Feb 27 10:48:00 crc kubenswrapper[4728]: I0227 10:48:00.158131 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b5bd38-8883-4de7-b4a5-8988fd083e21" containerName="ovn-config" Feb 27 10:48:00 crc kubenswrapper[4728]: I0227 10:48:00.159720 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536488-bw8pl" Feb 27 10:48:00 crc kubenswrapper[4728]: I0227 10:48:00.164459 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:48:00 crc kubenswrapper[4728]: I0227 10:48:00.164757 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 10:48:00 crc kubenswrapper[4728]: I0227 10:48:00.164257 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:48:00 crc kubenswrapper[4728]: I0227 10:48:00.185717 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536488-bw8pl"] Feb 27 10:48:00 crc kubenswrapper[4728]: I0227 10:48:00.324002 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjqxd\" (UniqueName: \"kubernetes.io/projected/f1047017-329b-497b-b8a0-235fd7b5681f-kube-api-access-vjqxd\") pod \"auto-csr-approver-29536488-bw8pl\" (UID: \"f1047017-329b-497b-b8a0-235fd7b5681f\") " pod="openshift-infra/auto-csr-approver-29536488-bw8pl" Feb 27 10:48:00 crc kubenswrapper[4728]: I0227 10:48:00.427063 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjqxd\" (UniqueName: \"kubernetes.io/projected/f1047017-329b-497b-b8a0-235fd7b5681f-kube-api-access-vjqxd\") pod 
\"auto-csr-approver-29536488-bw8pl\" (UID: \"f1047017-329b-497b-b8a0-235fd7b5681f\") " pod="openshift-infra/auto-csr-approver-29536488-bw8pl" Feb 27 10:48:00 crc kubenswrapper[4728]: I0227 10:48:00.580888 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjqxd\" (UniqueName: \"kubernetes.io/projected/f1047017-329b-497b-b8a0-235fd7b5681f-kube-api-access-vjqxd\") pod \"auto-csr-approver-29536488-bw8pl\" (UID: \"f1047017-329b-497b-b8a0-235fd7b5681f\") " pod="openshift-infra/auto-csr-approver-29536488-bw8pl" Feb 27 10:48:00 crc kubenswrapper[4728]: I0227 10:48:00.650490 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536488-bw8pl" Feb 27 10:48:01 crc kubenswrapper[4728]: I0227 10:48:01.241726 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536488-bw8pl"] Feb 27 10:48:01 crc kubenswrapper[4728]: W0227 10:48:01.245962 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1047017_329b_497b_b8a0_235fd7b5681f.slice/crio-22a773af482b28d8e1eefe7203136d90b680f799a1540f03e80d2399eff64680 WatchSource:0}: Error finding container 22a773af482b28d8e1eefe7203136d90b680f799a1540f03e80d2399eff64680: Status 404 returned error can't find the container with id 22a773af482b28d8e1eefe7203136d90b680f799a1540f03e80d2399eff64680 Feb 27 10:48:01 crc kubenswrapper[4728]: I0227 10:48:01.339778 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 27 10:48:01 crc kubenswrapper[4728]: I0227 10:48:01.657095 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536488-bw8pl" event={"ID":"f1047017-329b-497b-b8a0-235fd7b5681f","Type":"ContainerStarted","Data":"22a773af482b28d8e1eefe7203136d90b680f799a1540f03e80d2399eff64680"} Feb 27 10:48:01 crc kubenswrapper[4728]: 
I0227 10:48:01.692900 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="ad00da50-2e05-4612-a862-5cccd698e77b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: connect: connection refused" Feb 27 10:48:01 crc kubenswrapper[4728]: I0227 10:48:01.708687 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 27 10:48:02 crc kubenswrapper[4728]: I0227 10:48:02.673581 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d232c99d-32bc-45e3-bc7b-c9dca99571ac","Type":"ContainerStarted","Data":"c096e888333fa6454b440c5ae68384fbb77c96c3399f09917d83fd66e8e38520"} Feb 27 10:48:02 crc kubenswrapper[4728]: I0227 10:48:02.674342 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d232c99d-32bc-45e3-bc7b-c9dca99571ac","Type":"ContainerStarted","Data":"200c93ddc214dd2c1a7eabb2f8c58c7273bd26a6dae6f028b69a542c0a3fe49c"} Feb 27 10:48:02 crc kubenswrapper[4728]: I0227 10:48:02.678400 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536488-bw8pl" event={"ID":"f1047017-329b-497b-b8a0-235fd7b5681f","Type":"ContainerStarted","Data":"44f3186867c78c9f2e5005207e64bf55d17b81abd564ea2079614a63c852129d"} Feb 27 10:48:02 crc kubenswrapper[4728]: I0227 10:48:02.717991 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.717973943 podStartE2EDuration="19.717973943s" podCreationTimestamp="2026-02-27 10:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:48:02.716740128 +0000 UTC m=+1302.679106234" watchObservedRunningTime="2026-02-27 10:48:02.717973943 +0000 UTC m=+1302.680340049" Feb 27 10:48:02 crc 
kubenswrapper[4728]: I0227 10:48:02.748070 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536488-bw8pl" podStartSLOduration=1.783034771 podStartE2EDuration="2.748040709s" podCreationTimestamp="2026-02-27 10:48:00 +0000 UTC" firstStartedPulling="2026-02-27 10:48:01.248223853 +0000 UTC m=+1301.210589959" lastFinishedPulling="2026-02-27 10:48:02.213229791 +0000 UTC m=+1302.175595897" observedRunningTime="2026-02-27 10:48:02.736396739 +0000 UTC m=+1302.698762845" watchObservedRunningTime="2026-02-27 10:48:02.748040709 +0000 UTC m=+1302.710406835" Feb 27 10:48:03 crc kubenswrapper[4728]: I0227 10:48:03.689631 4728 generic.go:334] "Generic (PLEG): container finished" podID="f1047017-329b-497b-b8a0-235fd7b5681f" containerID="44f3186867c78c9f2e5005207e64bf55d17b81abd564ea2079614a63c852129d" exitCode=0 Feb 27 10:48:03 crc kubenswrapper[4728]: I0227 10:48:03.690752 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536488-bw8pl" event={"ID":"f1047017-329b-497b-b8a0-235fd7b5681f","Type":"ContainerDied","Data":"44f3186867c78c9f2e5005207e64bf55d17b81abd564ea2079614a63c852129d"} Feb 27 10:48:03 crc kubenswrapper[4728]: I0227 10:48:03.715421 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 27 10:48:04 crc kubenswrapper[4728]: I0227 10:48:04.701833 4728 generic.go:334] "Generic (PLEG): container finished" podID="cc607cb0-0557-4198-8bae-07f9a55cf4a5" containerID="4da79aa54374319530ddbe894d643f5b81add54efea317a80f7bb617f94ba060" exitCode=0 Feb 27 10:48:04 crc kubenswrapper[4728]: I0227 10:48:04.701946 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-98848" event={"ID":"cc607cb0-0557-4198-8bae-07f9a55cf4a5","Type":"ContainerDied","Data":"4da79aa54374319530ddbe894d643f5b81add54efea317a80f7bb617f94ba060"} Feb 27 10:48:05 crc kubenswrapper[4728]: I0227 10:48:05.130221 4728 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536488-bw8pl" Feb 27 10:48:05 crc kubenswrapper[4728]: I0227 10:48:05.243144 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjqxd\" (UniqueName: \"kubernetes.io/projected/f1047017-329b-497b-b8a0-235fd7b5681f-kube-api-access-vjqxd\") pod \"f1047017-329b-497b-b8a0-235fd7b5681f\" (UID: \"f1047017-329b-497b-b8a0-235fd7b5681f\") " Feb 27 10:48:05 crc kubenswrapper[4728]: I0227 10:48:05.256699 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1047017-329b-497b-b8a0-235fd7b5681f-kube-api-access-vjqxd" (OuterVolumeSpecName: "kube-api-access-vjqxd") pod "f1047017-329b-497b-b8a0-235fd7b5681f" (UID: "f1047017-329b-497b-b8a0-235fd7b5681f"). InnerVolumeSpecName "kube-api-access-vjqxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:48:05 crc kubenswrapper[4728]: I0227 10:48:05.345798 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjqxd\" (UniqueName: \"kubernetes.io/projected/f1047017-329b-497b-b8a0-235fd7b5681f-kube-api-access-vjqxd\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:05 crc kubenswrapper[4728]: I0227 10:48:05.730017 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536488-bw8pl" Feb 27 10:48:05 crc kubenswrapper[4728]: I0227 10:48:05.730066 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536488-bw8pl" event={"ID":"f1047017-329b-497b-b8a0-235fd7b5681f","Type":"ContainerDied","Data":"22a773af482b28d8e1eefe7203136d90b680f799a1540f03e80d2399eff64680"} Feb 27 10:48:05 crc kubenswrapper[4728]: I0227 10:48:05.730132 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22a773af482b28d8e1eefe7203136d90b680f799a1540f03e80d2399eff64680" Feb 27 10:48:05 crc kubenswrapper[4728]: I0227 10:48:05.922425 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:48:05 crc kubenswrapper[4728]: I0227 10:48:05.923043 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.209877 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536482-5svg8"] Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.221382 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536482-5svg8"] Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.330535 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-98848" Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.468048 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc607cb0-0557-4198-8bae-07f9a55cf4a5-combined-ca-bundle\") pod \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\" (UID: \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\") " Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.468137 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkppg\" (UniqueName: \"kubernetes.io/projected/cc607cb0-0557-4198-8bae-07f9a55cf4a5-kube-api-access-kkppg\") pod \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\" (UID: \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\") " Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.468345 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc607cb0-0557-4198-8bae-07f9a55cf4a5-config-data\") pod \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\" (UID: \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\") " Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.468397 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc607cb0-0557-4198-8bae-07f9a55cf4a5-db-sync-config-data\") pod \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\" (UID: \"cc607cb0-0557-4198-8bae-07f9a55cf4a5\") " Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.472932 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc607cb0-0557-4198-8bae-07f9a55cf4a5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cc607cb0-0557-4198-8bae-07f9a55cf4a5" (UID: "cc607cb0-0557-4198-8bae-07f9a55cf4a5"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.473206 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc607cb0-0557-4198-8bae-07f9a55cf4a5-kube-api-access-kkppg" (OuterVolumeSpecName: "kube-api-access-kkppg") pod "cc607cb0-0557-4198-8bae-07f9a55cf4a5" (UID: "cc607cb0-0557-4198-8bae-07f9a55cf4a5"). InnerVolumeSpecName "kube-api-access-kkppg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.499671 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc607cb0-0557-4198-8bae-07f9a55cf4a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc607cb0-0557-4198-8bae-07f9a55cf4a5" (UID: "cc607cb0-0557-4198-8bae-07f9a55cf4a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.524265 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc607cb0-0557-4198-8bae-07f9a55cf4a5-config-data" (OuterVolumeSpecName: "config-data") pod "cc607cb0-0557-4198-8bae-07f9a55cf4a5" (UID: "cc607cb0-0557-4198-8bae-07f9a55cf4a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.570924 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc607cb0-0557-4198-8bae-07f9a55cf4a5-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.571143 4728 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc607cb0-0557-4198-8bae-07f9a55cf4a5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.571203 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc607cb0-0557-4198-8bae-07f9a55cf4a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.571270 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkppg\" (UniqueName: \"kubernetes.io/projected/cc607cb0-0557-4198-8bae-07f9a55cf4a5-kube-api-access-kkppg\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.741657 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f16b827-2fa4-4f30-9c9f-d5eeabaa1793" path="/var/lib/kubelet/pods/7f16b827-2fa4-4f30-9c9f-d5eeabaa1793/volumes" Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.743916 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-98848" event={"ID":"cc607cb0-0557-4198-8bae-07f9a55cf4a5","Type":"ContainerDied","Data":"836ca9132bd4f002a28430cb9d38b0569502e39063b8d6f6523eacceec8a35e1"} Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.743962 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="836ca9132bd4f002a28430cb9d38b0569502e39063b8d6f6523eacceec8a35e1" Feb 27 10:48:06 crc kubenswrapper[4728]: I0227 10:48:06.744028 4728 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-98848" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.154660 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-lfgq9"] Feb 27 10:48:07 crc kubenswrapper[4728]: E0227 10:48:07.155079 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1047017-329b-497b-b8a0-235fd7b5681f" containerName="oc" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.155097 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1047017-329b-497b-b8a0-235fd7b5681f" containerName="oc" Feb 27 10:48:07 crc kubenswrapper[4728]: E0227 10:48:07.155112 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc607cb0-0557-4198-8bae-07f9a55cf4a5" containerName="glance-db-sync" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.155118 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc607cb0-0557-4198-8bae-07f9a55cf4a5" containerName="glance-db-sync" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.155409 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc607cb0-0557-4198-8bae-07f9a55cf4a5" containerName="glance-db-sync" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.155438 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1047017-329b-497b-b8a0-235fd7b5681f" containerName="oc" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.157190 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.171827 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-lfgq9"] Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.291612 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-dns-svc\") pod \"dnsmasq-dns-895cf5cf-lfgq9\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.291653 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-ovsdbserver-sb\") pod \"dnsmasq-dns-895cf5cf-lfgq9\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.291700 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-dns-swift-storage-0\") pod \"dnsmasq-dns-895cf5cf-lfgq9\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.291720 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-config\") pod \"dnsmasq-dns-895cf5cf-lfgq9\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.291772 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-ovsdbserver-nb\") pod \"dnsmasq-dns-895cf5cf-lfgq9\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.291812 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66rdk\" (UniqueName: \"kubernetes.io/projected/1080a33e-f552-404c-8381-281cf9ab1078-kube-api-access-66rdk\") pod \"dnsmasq-dns-895cf5cf-lfgq9\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.393379 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-dns-svc\") pod \"dnsmasq-dns-895cf5cf-lfgq9\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.393753 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-ovsdbserver-sb\") pod \"dnsmasq-dns-895cf5cf-lfgq9\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.393842 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-dns-swift-storage-0\") pod \"dnsmasq-dns-895cf5cf-lfgq9\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.393877 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-config\") pod \"dnsmasq-dns-895cf5cf-lfgq9\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.393977 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-ovsdbserver-nb\") pod \"dnsmasq-dns-895cf5cf-lfgq9\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.394073 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66rdk\" (UniqueName: \"kubernetes.io/projected/1080a33e-f552-404c-8381-281cf9ab1078-kube-api-access-66rdk\") pod \"dnsmasq-dns-895cf5cf-lfgq9\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.394963 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-ovsdbserver-nb\") pod \"dnsmasq-dns-895cf5cf-lfgq9\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.395149 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-ovsdbserver-sb\") pod \"dnsmasq-dns-895cf5cf-lfgq9\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.395360 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-config\") pod 
\"dnsmasq-dns-895cf5cf-lfgq9\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.395401 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-dns-swift-storage-0\") pod \"dnsmasq-dns-895cf5cf-lfgq9\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.395731 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-dns-svc\") pod \"dnsmasq-dns-895cf5cf-lfgq9\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.433600 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66rdk\" (UniqueName: \"kubernetes.io/projected/1080a33e-f552-404c-8381-281cf9ab1078-kube-api-access-66rdk\") pod \"dnsmasq-dns-895cf5cf-lfgq9\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:07 crc kubenswrapper[4728]: I0227 10:48:07.473906 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:08 crc kubenswrapper[4728]: I0227 10:48:08.028551 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-lfgq9"] Feb 27 10:48:08 crc kubenswrapper[4728]: I0227 10:48:08.772945 4728 generic.go:334] "Generic (PLEG): container finished" podID="1080a33e-f552-404c-8381-281cf9ab1078" containerID="393cc683237d0f1226c9deb8bd8852cd4c871681f45d92bcc309735445638319" exitCode=0 Feb 27 10:48:08 crc kubenswrapper[4728]: I0227 10:48:08.773447 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" event={"ID":"1080a33e-f552-404c-8381-281cf9ab1078","Type":"ContainerDied","Data":"393cc683237d0f1226c9deb8bd8852cd4c871681f45d92bcc309735445638319"} Feb 27 10:48:08 crc kubenswrapper[4728]: I0227 10:48:08.773542 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" event={"ID":"1080a33e-f552-404c-8381-281cf9ab1078","Type":"ContainerStarted","Data":"0e1b42e708d2188bf4de26042523426417bd09220d68a4ce2a056ee050552c4d"} Feb 27 10:48:09 crc kubenswrapper[4728]: I0227 10:48:09.786397 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" event={"ID":"1080a33e-f552-404c-8381-281cf9ab1078","Type":"ContainerStarted","Data":"ead82b73a5ce8b18f1147934bda38e7b8c4732a6f3fc5d0b5ac5d099b38fb766"} Feb 27 10:48:09 crc kubenswrapper[4728]: I0227 10:48:09.787894 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:09 crc kubenswrapper[4728]: I0227 10:48:09.815833 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" podStartSLOduration=2.8158099119999997 podStartE2EDuration="2.815809912s" podCreationTimestamp="2026-02-27 10:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-27 10:48:09.808913632 +0000 UTC m=+1309.771279738" watchObservedRunningTime="2026-02-27 10:48:09.815809912 +0000 UTC m=+1309.778176028" Feb 27 10:48:11 crc kubenswrapper[4728]: I0227 10:48:11.693761 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.062262 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-52njv"] Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.063808 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-52njv" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.076524 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-52njv"] Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.213659 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85f6bc27-1d0e-48ad-9a23-60062c5f8bdb-operator-scripts\") pod \"cinder-db-create-52njv\" (UID: \"85f6bc27-1d0e-48ad-9a23-60062c5f8bdb\") " pod="openstack/cinder-db-create-52njv" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.213745 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b8pm\" (UniqueName: \"kubernetes.io/projected/85f6bc27-1d0e-48ad-9a23-60062c5f8bdb-kube-api-access-4b8pm\") pod \"cinder-db-create-52njv\" (UID: \"85f6bc27-1d0e-48ad-9a23-60062c5f8bdb\") " pod="openstack/cinder-db-create-52njv" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.265024 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-6bca-account-create-update-bcjb2"] Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.266365 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-6bca-account-create-update-bcjb2" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.269150 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.277837 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-6bca-account-create-update-bcjb2"] Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.315622 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85f6bc27-1d0e-48ad-9a23-60062c5f8bdb-operator-scripts\") pod \"cinder-db-create-52njv\" (UID: \"85f6bc27-1d0e-48ad-9a23-60062c5f8bdb\") " pod="openstack/cinder-db-create-52njv" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.315891 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b8pm\" (UniqueName: \"kubernetes.io/projected/85f6bc27-1d0e-48ad-9a23-60062c5f8bdb-kube-api-access-4b8pm\") pod \"cinder-db-create-52njv\" (UID: \"85f6bc27-1d0e-48ad-9a23-60062c5f8bdb\") " pod="openstack/cinder-db-create-52njv" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.316254 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85f6bc27-1d0e-48ad-9a23-60062c5f8bdb-operator-scripts\") pod \"cinder-db-create-52njv\" (UID: \"85f6bc27-1d0e-48ad-9a23-60062c5f8bdb\") " pod="openstack/cinder-db-create-52njv" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.339871 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b8pm\" (UniqueName: \"kubernetes.io/projected/85f6bc27-1d0e-48ad-9a23-60062c5f8bdb-kube-api-access-4b8pm\") pod \"cinder-db-create-52njv\" (UID: \"85f6bc27-1d0e-48ad-9a23-60062c5f8bdb\") " pod="openstack/cinder-db-create-52njv" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 
10:48:12.353874 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-lb6tb"] Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.355065 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-lb6tb" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.371212 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-lb6tb"] Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.380333 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-151d-account-create-update-dsl49"] Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.381904 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-151d-account-create-update-dsl49" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.383918 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.385994 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-52njv" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.417569 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6lkz\" (UniqueName: \"kubernetes.io/projected/8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd-kube-api-access-t6lkz\") pod \"heat-6bca-account-create-update-bcjb2\" (UID: \"8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd\") " pod="openstack/heat-6bca-account-create-update-bcjb2" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.417656 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd-operator-scripts\") pod \"heat-6bca-account-create-update-bcjb2\" (UID: \"8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd\") " pod="openstack/heat-6bca-account-create-update-bcjb2" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.441184 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-151d-account-create-update-dsl49"] Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.499563 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-bc9dk"] Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.501048 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-bc9dk" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.504878 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.505075 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.512550 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bc9dk"] Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.517055 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.517276 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2jf8f" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.518963 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd-operator-scripts\") pod \"heat-6bca-account-create-update-bcjb2\" (UID: \"8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd\") " pod="openstack/heat-6bca-account-create-update-bcjb2" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.519010 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btvh7\" (UniqueName: \"kubernetes.io/projected/abbbfc84-fe7d-4fc9-8f96-d360b6356660-kube-api-access-btvh7\") pod \"cinder-151d-account-create-update-dsl49\" (UID: \"abbbfc84-fe7d-4fc9-8f96-d360b6356660\") " pod="openstack/cinder-151d-account-create-update-dsl49" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.519054 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/abbbfc84-fe7d-4fc9-8f96-d360b6356660-operator-scripts\") pod \"cinder-151d-account-create-update-dsl49\" (UID: \"abbbfc84-fe7d-4fc9-8f96-d360b6356660\") " pod="openstack/cinder-151d-account-create-update-dsl49" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.519097 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7160eba2-79dd-42c9-8540-30f948234052-operator-scripts\") pod \"heat-db-create-lb6tb\" (UID: \"7160eba2-79dd-42c9-8540-30f948234052\") " pod="openstack/heat-db-create-lb6tb" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.519155 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxlmv\" (UniqueName: \"kubernetes.io/projected/7160eba2-79dd-42c9-8540-30f948234052-kube-api-access-mxlmv\") pod \"heat-db-create-lb6tb\" (UID: \"7160eba2-79dd-42c9-8540-30f948234052\") " pod="openstack/heat-db-create-lb6tb" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.519193 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6lkz\" (UniqueName: \"kubernetes.io/projected/8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd-kube-api-access-t6lkz\") pod \"heat-6bca-account-create-update-bcjb2\" (UID: \"8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd\") " pod="openstack/heat-6bca-account-create-update-bcjb2" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.522690 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd-operator-scripts\") pod \"heat-6bca-account-create-update-bcjb2\" (UID: \"8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd\") " pod="openstack/heat-6bca-account-create-update-bcjb2" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.544981 4728 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-db-create-4rkxc"] Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.546366 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4rkxc" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.571599 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4rkxc"] Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.576953 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6lkz\" (UniqueName: \"kubernetes.io/projected/8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd-kube-api-access-t6lkz\") pod \"heat-6bca-account-create-update-bcjb2\" (UID: \"8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd\") " pod="openstack/heat-6bca-account-create-update-bcjb2" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.581119 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-6bca-account-create-update-bcjb2" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.634747 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btvh7\" (UniqueName: \"kubernetes.io/projected/abbbfc84-fe7d-4fc9-8f96-d360b6356660-kube-api-access-btvh7\") pod \"cinder-151d-account-create-update-dsl49\" (UID: \"abbbfc84-fe7d-4fc9-8f96-d360b6356660\") " pod="openstack/cinder-151d-account-create-update-dsl49" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.635030 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wvcx\" (UniqueName: \"kubernetes.io/projected/7100f5fe-cf67-4a79-b69f-c2ccf91d9426-kube-api-access-4wvcx\") pod \"barbican-db-create-4rkxc\" (UID: \"7100f5fe-cf67-4a79-b69f-c2ccf91d9426\") " pod="openstack/barbican-db-create-4rkxc" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.635060 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/abbbfc84-fe7d-4fc9-8f96-d360b6356660-operator-scripts\") pod \"cinder-151d-account-create-update-dsl49\" (UID: \"abbbfc84-fe7d-4fc9-8f96-d360b6356660\") " pod="openstack/cinder-151d-account-create-update-dsl49" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.635083 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d31f26e9-dded-4375-abb8-f038bce13899-config-data\") pod \"keystone-db-sync-bc9dk\" (UID: \"d31f26e9-dded-4375-abb8-f038bce13899\") " pod="openstack/keystone-db-sync-bc9dk" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.635125 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7160eba2-79dd-42c9-8540-30f948234052-operator-scripts\") pod \"heat-db-create-lb6tb\" (UID: \"7160eba2-79dd-42c9-8540-30f948234052\") " pod="openstack/heat-db-create-lb6tb" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.635161 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d31f26e9-dded-4375-abb8-f038bce13899-combined-ca-bundle\") pod \"keystone-db-sync-bc9dk\" (UID: \"d31f26e9-dded-4375-abb8-f038bce13899\") " pod="openstack/keystone-db-sync-bc9dk" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.635185 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dksn2\" (UniqueName: \"kubernetes.io/projected/d31f26e9-dded-4375-abb8-f038bce13899-kube-api-access-dksn2\") pod \"keystone-db-sync-bc9dk\" (UID: \"d31f26e9-dded-4375-abb8-f038bce13899\") " pod="openstack/keystone-db-sync-bc9dk" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.635213 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7100f5fe-cf67-4a79-b69f-c2ccf91d9426-operator-scripts\") pod \"barbican-db-create-4rkxc\" (UID: \"7100f5fe-cf67-4a79-b69f-c2ccf91d9426\") " pod="openstack/barbican-db-create-4rkxc" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.635240 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxlmv\" (UniqueName: \"kubernetes.io/projected/7160eba2-79dd-42c9-8540-30f948234052-kube-api-access-mxlmv\") pod \"heat-db-create-lb6tb\" (UID: \"7160eba2-79dd-42c9-8540-30f948234052\") " pod="openstack/heat-db-create-lb6tb" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.636293 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abbbfc84-fe7d-4fc9-8f96-d360b6356660-operator-scripts\") pod \"cinder-151d-account-create-update-dsl49\" (UID: \"abbbfc84-fe7d-4fc9-8f96-d360b6356660\") " pod="openstack/cinder-151d-account-create-update-dsl49" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.636465 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7160eba2-79dd-42c9-8540-30f948234052-operator-scripts\") pod \"heat-db-create-lb6tb\" (UID: \"7160eba2-79dd-42c9-8540-30f948234052\") " pod="openstack/heat-db-create-lb6tb" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.669014 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btvh7\" (UniqueName: \"kubernetes.io/projected/abbbfc84-fe7d-4fc9-8f96-d360b6356660-kube-api-access-btvh7\") pod \"cinder-151d-account-create-update-dsl49\" (UID: \"abbbfc84-fe7d-4fc9-8f96-d360b6356660\") " pod="openstack/cinder-151d-account-create-update-dsl49" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.678758 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxlmv\" (UniqueName: 
\"kubernetes.io/projected/7160eba2-79dd-42c9-8540-30f948234052-kube-api-access-mxlmv\") pod \"heat-db-create-lb6tb\" (UID: \"7160eba2-79dd-42c9-8540-30f948234052\") " pod="openstack/heat-db-create-lb6tb" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.708577 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-332e-account-create-update-mt6gn"] Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.712864 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-332e-account-create-update-mt6gn" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.717378 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.742056 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wvcx\" (UniqueName: \"kubernetes.io/projected/7100f5fe-cf67-4a79-b69f-c2ccf91d9426-kube-api-access-4wvcx\") pod \"barbican-db-create-4rkxc\" (UID: \"7100f5fe-cf67-4a79-b69f-c2ccf91d9426\") " pod="openstack/barbican-db-create-4rkxc" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.748031 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d31f26e9-dded-4375-abb8-f038bce13899-config-data\") pod \"keystone-db-sync-bc9dk\" (UID: \"d31f26e9-dded-4375-abb8-f038bce13899\") " pod="openstack/keystone-db-sync-bc9dk" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.748336 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d31f26e9-dded-4375-abb8-f038bce13899-combined-ca-bundle\") pod \"keystone-db-sync-bc9dk\" (UID: \"d31f26e9-dded-4375-abb8-f038bce13899\") " pod="openstack/keystone-db-sync-bc9dk" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.749053 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dksn2\" (UniqueName: \"kubernetes.io/projected/d31f26e9-dded-4375-abb8-f038bce13899-kube-api-access-dksn2\") pod \"keystone-db-sync-bc9dk\" (UID: \"d31f26e9-dded-4375-abb8-f038bce13899\") " pod="openstack/keystone-db-sync-bc9dk" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.749178 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7100f5fe-cf67-4a79-b69f-c2ccf91d9426-operator-scripts\") pod \"barbican-db-create-4rkxc\" (UID: \"7100f5fe-cf67-4a79-b69f-c2ccf91d9426\") " pod="openstack/barbican-db-create-4rkxc" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.750830 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7100f5fe-cf67-4a79-b69f-c2ccf91d9426-operator-scripts\") pod \"barbican-db-create-4rkxc\" (UID: \"7100f5fe-cf67-4a79-b69f-c2ccf91d9426\") " pod="openstack/barbican-db-create-4rkxc" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.752441 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d31f26e9-dded-4375-abb8-f038bce13899-config-data\") pod \"keystone-db-sync-bc9dk\" (UID: \"d31f26e9-dded-4375-abb8-f038bce13899\") " pod="openstack/keystone-db-sync-bc9dk" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.754426 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d31f26e9-dded-4375-abb8-f038bce13899-combined-ca-bundle\") pod \"keystone-db-sync-bc9dk\" (UID: \"d31f26e9-dded-4375-abb8-f038bce13899\") " pod="openstack/keystone-db-sync-bc9dk" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.788521 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-332e-account-create-update-mt6gn"] Feb 27 10:48:12 crc 
kubenswrapper[4728]: I0227 10:48:12.806258 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-lb6tb" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.817048 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dksn2\" (UniqueName: \"kubernetes.io/projected/d31f26e9-dded-4375-abb8-f038bce13899-kube-api-access-dksn2\") pod \"keystone-db-sync-bc9dk\" (UID: \"d31f26e9-dded-4375-abb8-f038bce13899\") " pod="openstack/keystone-db-sync-bc9dk" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.820280 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wvcx\" (UniqueName: \"kubernetes.io/projected/7100f5fe-cf67-4a79-b69f-c2ccf91d9426-kube-api-access-4wvcx\") pod \"barbican-db-create-4rkxc\" (UID: \"7100f5fe-cf67-4a79-b69f-c2ccf91d9426\") " pod="openstack/barbican-db-create-4rkxc" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.832053 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-qqckp"] Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.834536 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qqckp" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.836547 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-151d-account-create-update-dsl49" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.848264 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-bc9dk" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.855470 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxxh7\" (UniqueName: \"kubernetes.io/projected/ea36ec6b-4135-4ead-9535-8d54c658c5c3-kube-api-access-wxxh7\") pod \"neutron-332e-account-create-update-mt6gn\" (UID: \"ea36ec6b-4135-4ead-9535-8d54c658c5c3\") " pod="openstack/neutron-332e-account-create-update-mt6gn" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.855586 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea36ec6b-4135-4ead-9535-8d54c658c5c3-operator-scripts\") pod \"neutron-332e-account-create-update-mt6gn\" (UID: \"ea36ec6b-4135-4ead-9535-8d54c658c5c3\") " pod="openstack/neutron-332e-account-create-update-mt6gn" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.880972 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-31b7-account-create-update-gwfvg"] Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.882854 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-31b7-account-create-update-gwfvg" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.884786 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.894648 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-31b7-account-create-update-gwfvg"] Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.907978 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qqckp"] Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.957845 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnxlq\" (UniqueName: \"kubernetes.io/projected/b856c7d2-6928-4bd0-b327-2abd6d6f664f-kube-api-access-wnxlq\") pod \"barbican-31b7-account-create-update-gwfvg\" (UID: \"b856c7d2-6928-4bd0-b327-2abd6d6f664f\") " pod="openstack/barbican-31b7-account-create-update-gwfvg" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.957929 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9136d15-a48d-43e3-aff8-ca09ca1f222b-operator-scripts\") pod \"neutron-db-create-qqckp\" (UID: \"f9136d15-a48d-43e3-aff8-ca09ca1f222b\") " pod="openstack/neutron-db-create-qqckp" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.957983 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxxh7\" (UniqueName: \"kubernetes.io/projected/ea36ec6b-4135-4ead-9535-8d54c658c5c3-kube-api-access-wxxh7\") pod \"neutron-332e-account-create-update-mt6gn\" (UID: \"ea36ec6b-4135-4ead-9535-8d54c658c5c3\") " pod="openstack/neutron-332e-account-create-update-mt6gn" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.958053 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b856c7d2-6928-4bd0-b327-2abd6d6f664f-operator-scripts\") pod \"barbican-31b7-account-create-update-gwfvg\" (UID: \"b856c7d2-6928-4bd0-b327-2abd6d6f664f\") " pod="openstack/barbican-31b7-account-create-update-gwfvg" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.958075 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqpqx\" (UniqueName: \"kubernetes.io/projected/f9136d15-a48d-43e3-aff8-ca09ca1f222b-kube-api-access-bqpqx\") pod \"neutron-db-create-qqckp\" (UID: \"f9136d15-a48d-43e3-aff8-ca09ca1f222b\") " pod="openstack/neutron-db-create-qqckp" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.958102 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea36ec6b-4135-4ead-9535-8d54c658c5c3-operator-scripts\") pod \"neutron-332e-account-create-update-mt6gn\" (UID: \"ea36ec6b-4135-4ead-9535-8d54c658c5c3\") " pod="openstack/neutron-332e-account-create-update-mt6gn" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.958882 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea36ec6b-4135-4ead-9535-8d54c658c5c3-operator-scripts\") pod \"neutron-332e-account-create-update-mt6gn\" (UID: \"ea36ec6b-4135-4ead-9535-8d54c658c5c3\") " pod="openstack/neutron-332e-account-create-update-mt6gn" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.972371 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-4rkxc" Feb 27 10:48:12 crc kubenswrapper[4728]: I0227 10:48:12.998080 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxxh7\" (UniqueName: \"kubernetes.io/projected/ea36ec6b-4135-4ead-9535-8d54c658c5c3-kube-api-access-wxxh7\") pod \"neutron-332e-account-create-update-mt6gn\" (UID: \"ea36ec6b-4135-4ead-9535-8d54c658c5c3\") " pod="openstack/neutron-332e-account-create-update-mt6gn" Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.060063 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnxlq\" (UniqueName: \"kubernetes.io/projected/b856c7d2-6928-4bd0-b327-2abd6d6f664f-kube-api-access-wnxlq\") pod \"barbican-31b7-account-create-update-gwfvg\" (UID: \"b856c7d2-6928-4bd0-b327-2abd6d6f664f\") " pod="openstack/barbican-31b7-account-create-update-gwfvg" Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.060182 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9136d15-a48d-43e3-aff8-ca09ca1f222b-operator-scripts\") pod \"neutron-db-create-qqckp\" (UID: \"f9136d15-a48d-43e3-aff8-ca09ca1f222b\") " pod="openstack/neutron-db-create-qqckp" Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.060302 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b856c7d2-6928-4bd0-b327-2abd6d6f664f-operator-scripts\") pod \"barbican-31b7-account-create-update-gwfvg\" (UID: \"b856c7d2-6928-4bd0-b327-2abd6d6f664f\") " pod="openstack/barbican-31b7-account-create-update-gwfvg" Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.060347 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqpqx\" (UniqueName: \"kubernetes.io/projected/f9136d15-a48d-43e3-aff8-ca09ca1f222b-kube-api-access-bqpqx\") pod 
\"neutron-db-create-qqckp\" (UID: \"f9136d15-a48d-43e3-aff8-ca09ca1f222b\") " pod="openstack/neutron-db-create-qqckp" Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.061129 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9136d15-a48d-43e3-aff8-ca09ca1f222b-operator-scripts\") pod \"neutron-db-create-qqckp\" (UID: \"f9136d15-a48d-43e3-aff8-ca09ca1f222b\") " pod="openstack/neutron-db-create-qqckp" Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.061617 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b856c7d2-6928-4bd0-b327-2abd6d6f664f-operator-scripts\") pod \"barbican-31b7-account-create-update-gwfvg\" (UID: \"b856c7d2-6928-4bd0-b327-2abd6d6f664f\") " pod="openstack/barbican-31b7-account-create-update-gwfvg" Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.077984 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqpqx\" (UniqueName: \"kubernetes.io/projected/f9136d15-a48d-43e3-aff8-ca09ca1f222b-kube-api-access-bqpqx\") pod \"neutron-db-create-qqckp\" (UID: \"f9136d15-a48d-43e3-aff8-ca09ca1f222b\") " pod="openstack/neutron-db-create-qqckp" Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.079038 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-332e-account-create-update-mt6gn" Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.080322 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnxlq\" (UniqueName: \"kubernetes.io/projected/b856c7d2-6928-4bd0-b327-2abd6d6f664f-kube-api-access-wnxlq\") pod \"barbican-31b7-account-create-update-gwfvg\" (UID: \"b856c7d2-6928-4bd0-b327-2abd6d6f664f\") " pod="openstack/barbican-31b7-account-create-update-gwfvg" Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.161782 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qqckp" Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.171221 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-52njv"] Feb 27 10:48:13 crc kubenswrapper[4728]: W0227 10:48:13.189672 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85f6bc27_1d0e_48ad_9a23_60062c5f8bdb.slice/crio-d371829dce21b5b1d816128472eb0efe1a7f9e56265c0b4912bd94af5f161c19 WatchSource:0}: Error finding container d371829dce21b5b1d816128472eb0efe1a7f9e56265c0b4912bd94af5f161c19: Status 404 returned error can't find the container with id d371829dce21b5b1d816128472eb0efe1a7f9e56265c0b4912bd94af5f161c19 Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.200649 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-31b7-account-create-update-gwfvg" Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.325560 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-6bca-account-create-update-bcjb2"] Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.581714 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-151d-account-create-update-dsl49"] Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.715271 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.730244 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.846694 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-151d-account-create-update-dsl49" event={"ID":"abbbfc84-fe7d-4fc9-8f96-d360b6356660","Type":"ContainerStarted","Data":"3f2d62d89760bd5119b8fe7c6f2d7b68058fb598b0b67614dd23d2036c73bed5"} Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.849050 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-6bca-account-create-update-bcjb2" event={"ID":"8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd","Type":"ContainerStarted","Data":"1ed38f7030e6d4cbea31f6daacb51d924eb7f2ce569c297ad6aeb8564bf60313"} Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.852716 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-52njv" event={"ID":"85f6bc27-1d0e-48ad-9a23-60062c5f8bdb","Type":"ContainerStarted","Data":"0806a4561ad4b075a2627fa32d9790de05161694af2fb1330de565f2a8ac485d"} Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.852854 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-52njv" 
event={"ID":"85f6bc27-1d0e-48ad-9a23-60062c5f8bdb","Type":"ContainerStarted","Data":"d371829dce21b5b1d816128472eb0efe1a7f9e56265c0b4912bd94af5f161c19"} Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.857427 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.891085 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-lb6tb"] Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.897076 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-52njv" podStartSLOduration=1.897057571 podStartE2EDuration="1.897057571s" podCreationTimestamp="2026-02-27 10:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:48:13.876484535 +0000 UTC m=+1313.838850641" watchObservedRunningTime="2026-02-27 10:48:13.897057571 +0000 UTC m=+1313.859423677" Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.932582 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4rkxc"] Feb 27 10:48:13 crc kubenswrapper[4728]: W0227 10:48:13.936619 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7100f5fe_cf67_4a79_b69f_c2ccf91d9426.slice/crio-9d3f0e3c40878353230e829c3b48186910eae7a77fd0a8d02ade2509f52abf6d WatchSource:0}: Error finding container 9d3f0e3c40878353230e829c3b48186910eae7a77fd0a8d02ade2509f52abf6d: Status 404 returned error can't find the container with id 9d3f0e3c40878353230e829c3b48186910eae7a77fd0a8d02ade2509f52abf6d Feb 27 10:48:13 crc kubenswrapper[4728]: I0227 10:48:13.941539 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bc9dk"] Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.108414 4728 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/neutron-332e-account-create-update-mt6gn"] Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.237564 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qqckp"] Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.415640 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-31b7-account-create-update-gwfvg"] Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.870807 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bc9dk" event={"ID":"d31f26e9-dded-4375-abb8-f038bce13899","Type":"ContainerStarted","Data":"20dad8f26658d56489992fb684c21b4929c2fe75122ac7bd2b21399777837780"} Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.873403 4728 generic.go:334] "Generic (PLEG): container finished" podID="f9136d15-a48d-43e3-aff8-ca09ca1f222b" containerID="2772bdd65c95354316ecfc963f4c5a14af38ca8c9ccd5d8d81c9c86136755062" exitCode=0 Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.873468 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qqckp" event={"ID":"f9136d15-a48d-43e3-aff8-ca09ca1f222b","Type":"ContainerDied","Data":"2772bdd65c95354316ecfc963f4c5a14af38ca8c9ccd5d8d81c9c86136755062"} Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.873494 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qqckp" event={"ID":"f9136d15-a48d-43e3-aff8-ca09ca1f222b","Type":"ContainerStarted","Data":"03d5e197ff86696fa6d30217a9533817a9743b80aebd37f7cccfed92d6679431"} Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.875765 4728 generic.go:334] "Generic (PLEG): container finished" podID="85f6bc27-1d0e-48ad-9a23-60062c5f8bdb" containerID="0806a4561ad4b075a2627fa32d9790de05161694af2fb1330de565f2a8ac485d" exitCode=0 Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.875817 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-create-52njv" event={"ID":"85f6bc27-1d0e-48ad-9a23-60062c5f8bdb","Type":"ContainerDied","Data":"0806a4561ad4b075a2627fa32d9790de05161694af2fb1330de565f2a8ac485d"} Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.877256 4728 generic.go:334] "Generic (PLEG): container finished" podID="ea36ec6b-4135-4ead-9535-8d54c658c5c3" containerID="8caff2e6a77c3078e4fba1cbe58ca491f7dafb5852c1814771cc5b8218eb1488" exitCode=0 Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.877326 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-332e-account-create-update-mt6gn" event={"ID":"ea36ec6b-4135-4ead-9535-8d54c658c5c3","Type":"ContainerDied","Data":"8caff2e6a77c3078e4fba1cbe58ca491f7dafb5852c1814771cc5b8218eb1488"} Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.877353 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-332e-account-create-update-mt6gn" event={"ID":"ea36ec6b-4135-4ead-9535-8d54c658c5c3","Type":"ContainerStarted","Data":"22de3a186b9ba44d39c30656089de69cbbd6e4a389431b14dcb14711c674a848"} Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.878839 4728 generic.go:334] "Generic (PLEG): container finished" podID="7100f5fe-cf67-4a79-b69f-c2ccf91d9426" containerID="9e8a5e2c6b151274c38fd27234450268911df4a0d9684570d479781e0a023a50" exitCode=0 Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.878892 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4rkxc" event={"ID":"7100f5fe-cf67-4a79-b69f-c2ccf91d9426","Type":"ContainerDied","Data":"9e8a5e2c6b151274c38fd27234450268911df4a0d9684570d479781e0a023a50"} Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.878911 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4rkxc" event={"ID":"7100f5fe-cf67-4a79-b69f-c2ccf91d9426","Type":"ContainerStarted","Data":"9d3f0e3c40878353230e829c3b48186910eae7a77fd0a8d02ade2509f52abf6d"} Feb 27 10:48:14 crc 
kubenswrapper[4728]: I0227 10:48:14.882526 4728 generic.go:334] "Generic (PLEG): container finished" podID="abbbfc84-fe7d-4fc9-8f96-d360b6356660" containerID="719698de05e31a5bdedf94ade490685ae23fe85cb8b9a4a4a10c9ea62d2bc702" exitCode=0 Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.882576 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-151d-account-create-update-dsl49" event={"ID":"abbbfc84-fe7d-4fc9-8f96-d360b6356660","Type":"ContainerDied","Data":"719698de05e31a5bdedf94ade490685ae23fe85cb8b9a4a4a10c9ea62d2bc702"} Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.885813 4728 generic.go:334] "Generic (PLEG): container finished" podID="8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd" containerID="cc4747e9802a8d462b2d4c2235792b8ceb2cceb46be16b9796c075451ca903b4" exitCode=0 Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.885924 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-6bca-account-create-update-bcjb2" event={"ID":"8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd","Type":"ContainerDied","Data":"cc4747e9802a8d462b2d4c2235792b8ceb2cceb46be16b9796c075451ca903b4"} Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.893596 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-31b7-account-create-update-gwfvg" event={"ID":"b856c7d2-6928-4bd0-b327-2abd6d6f664f","Type":"ContainerStarted","Data":"07cf9fd2a18d77b3cbb2d6342d8c3ea4e18e815b38ce23ee3984ce1885adf4ea"} Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.897079 4728 generic.go:334] "Generic (PLEG): container finished" podID="7160eba2-79dd-42c9-8540-30f948234052" containerID="6b7582970aa14ffa741abca577db0d6d0bc4e25d8e05e7e1d66642688674ae26" exitCode=0 Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.897308 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-lb6tb" 
event={"ID":"7160eba2-79dd-42c9-8540-30f948234052","Type":"ContainerDied","Data":"6b7582970aa14ffa741abca577db0d6d0bc4e25d8e05e7e1d66642688674ae26"} Feb 27 10:48:14 crc kubenswrapper[4728]: I0227 10:48:14.897354 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-lb6tb" event={"ID":"7160eba2-79dd-42c9-8540-30f948234052","Type":"ContainerStarted","Data":"f21837ee8710b1267ea97fef6b11e658ae640631bbbb50e7f721a9345c8b82c3"} Feb 27 10:48:15 crc kubenswrapper[4728]: I0227 10:48:15.909022 4728 generic.go:334] "Generic (PLEG): container finished" podID="b856c7d2-6928-4bd0-b327-2abd6d6f664f" containerID="b15315e8121f267a59c25c275594fd0210f3375e891f65b83d56685bbc950f17" exitCode=0 Feb 27 10:48:15 crc kubenswrapper[4728]: I0227 10:48:15.909317 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-31b7-account-create-update-gwfvg" event={"ID":"b856c7d2-6928-4bd0-b327-2abd6d6f664f","Type":"ContainerDied","Data":"b15315e8121f267a59c25c275594fd0210f3375e891f65b83d56685bbc950f17"} Feb 27 10:48:17 crc kubenswrapper[4728]: I0227 10:48:17.476882 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:48:17 crc kubenswrapper[4728]: I0227 10:48:17.595363 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-sm4dz"] Feb 27 10:48:17 crc kubenswrapper[4728]: I0227 10:48:17.595590 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" podUID="6bf98b91-deff-4ed7-b4e9-ec72db4d9d92" containerName="dnsmasq-dns" containerID="cri-o://02ed52082e50a91686c703b06fb559e5c0dee80506d11e273846680ad2c0a233" gracePeriod=10 Feb 27 10:48:17 crc kubenswrapper[4728]: I0227 10:48:17.945656 4728 generic.go:334] "Generic (PLEG): container finished" podID="6bf98b91-deff-4ed7-b4e9-ec72db4d9d92" containerID="02ed52082e50a91686c703b06fb559e5c0dee80506d11e273846680ad2c0a233" exitCode=0 Feb 
27 10:48:17 crc kubenswrapper[4728]: I0227 10:48:17.945699 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" event={"ID":"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92","Type":"ContainerDied","Data":"02ed52082e50a91686c703b06fb559e5c0dee80506d11e273846680ad2c0a233"} Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.044329 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" podUID="6bf98b91-deff-4ed7-b4e9-ec72db4d9d92" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.172:5353: connect: connection refused" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.485868 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qqckp" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.497123 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-lb6tb" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.531137 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-31b7-account-create-update-gwfvg" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.541779 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqpqx\" (UniqueName: \"kubernetes.io/projected/f9136d15-a48d-43e3-aff8-ca09ca1f222b-kube-api-access-bqpqx\") pod \"f9136d15-a48d-43e3-aff8-ca09ca1f222b\" (UID: \"f9136d15-a48d-43e3-aff8-ca09ca1f222b\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.541840 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9136d15-a48d-43e3-aff8-ca09ca1f222b-operator-scripts\") pod \"f9136d15-a48d-43e3-aff8-ca09ca1f222b\" (UID: \"f9136d15-a48d-43e3-aff8-ca09ca1f222b\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.541915 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxlmv\" (UniqueName: \"kubernetes.io/projected/7160eba2-79dd-42c9-8540-30f948234052-kube-api-access-mxlmv\") pod \"7160eba2-79dd-42c9-8540-30f948234052\" (UID: \"7160eba2-79dd-42c9-8540-30f948234052\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.542003 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7160eba2-79dd-42c9-8540-30f948234052-operator-scripts\") pod \"7160eba2-79dd-42c9-8540-30f948234052\" (UID: \"7160eba2-79dd-42c9-8540-30f948234052\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.543425 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7160eba2-79dd-42c9-8540-30f948234052-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7160eba2-79dd-42c9-8540-30f948234052" (UID: "7160eba2-79dd-42c9-8540-30f948234052"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.544140 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9136d15-a48d-43e3-aff8-ca09ca1f222b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9136d15-a48d-43e3-aff8-ca09ca1f222b" (UID: "f9136d15-a48d-43e3-aff8-ca09ca1f222b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.549361 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9136d15-a48d-43e3-aff8-ca09ca1f222b-kube-api-access-bqpqx" (OuterVolumeSpecName: "kube-api-access-bqpqx") pod "f9136d15-a48d-43e3-aff8-ca09ca1f222b" (UID: "f9136d15-a48d-43e3-aff8-ca09ca1f222b"). InnerVolumeSpecName "kube-api-access-bqpqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.553263 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7160eba2-79dd-42c9-8540-30f948234052-kube-api-access-mxlmv" (OuterVolumeSpecName: "kube-api-access-mxlmv") pod "7160eba2-79dd-42c9-8540-30f948234052" (UID: "7160eba2-79dd-42c9-8540-30f948234052"). InnerVolumeSpecName "kube-api-access-mxlmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.553306 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-6bca-account-create-update-bcjb2" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.623774 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-332e-account-create-update-mt6gn" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.631169 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-151d-account-create-update-dsl49" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.646754 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnxlq\" (UniqueName: \"kubernetes.io/projected/b856c7d2-6928-4bd0-b327-2abd6d6f664f-kube-api-access-wnxlq\") pod \"b856c7d2-6928-4bd0-b327-2abd6d6f664f\" (UID: \"b856c7d2-6928-4bd0-b327-2abd6d6f664f\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.646838 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b856c7d2-6928-4bd0-b327-2abd6d6f664f-operator-scripts\") pod \"b856c7d2-6928-4bd0-b327-2abd6d6f664f\" (UID: \"b856c7d2-6928-4bd0-b327-2abd6d6f664f\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.646955 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd-operator-scripts\") pod \"8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd\" (UID: \"8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.647180 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6lkz\" (UniqueName: \"kubernetes.io/projected/8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd-kube-api-access-t6lkz\") pod \"8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd\" (UID: \"8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.647587 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b856c7d2-6928-4bd0-b327-2abd6d6f664f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b856c7d2-6928-4bd0-b327-2abd6d6f664f" (UID: "b856c7d2-6928-4bd0-b327-2abd6d6f664f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.647848 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqpqx\" (UniqueName: \"kubernetes.io/projected/f9136d15-a48d-43e3-aff8-ca09ca1f222b-kube-api-access-bqpqx\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.647871 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9136d15-a48d-43e3-aff8-ca09ca1f222b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.647897 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b856c7d2-6928-4bd0-b327-2abd6d6f664f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.647908 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxlmv\" (UniqueName: \"kubernetes.io/projected/7160eba2-79dd-42c9-8540-30f948234052-kube-api-access-mxlmv\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.647919 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7160eba2-79dd-42c9-8540-30f948234052-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.647977 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd" (UID: "8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.651916 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4rkxc" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.652257 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b856c7d2-6928-4bd0-b327-2abd6d6f664f-kube-api-access-wnxlq" (OuterVolumeSpecName: "kube-api-access-wnxlq") pod "b856c7d2-6928-4bd0-b327-2abd6d6f664f" (UID: "b856c7d2-6928-4bd0-b327-2abd6d6f664f"). InnerVolumeSpecName "kube-api-access-wnxlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.652465 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd-kube-api-access-t6lkz" (OuterVolumeSpecName: "kube-api-access-t6lkz") pod "8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd" (UID: "8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd"). InnerVolumeSpecName "kube-api-access-t6lkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.668081 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-52njv" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.687477 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.751023 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-ovsdbserver-nb\") pod \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.751151 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b8pm\" (UniqueName: \"kubernetes.io/projected/85f6bc27-1d0e-48ad-9a23-60062c5f8bdb-kube-api-access-4b8pm\") pod \"85f6bc27-1d0e-48ad-9a23-60062c5f8bdb\" (UID: \"85f6bc27-1d0e-48ad-9a23-60062c5f8bdb\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.751185 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btvh7\" (UniqueName: \"kubernetes.io/projected/abbbfc84-fe7d-4fc9-8f96-d360b6356660-kube-api-access-btvh7\") pod \"abbbfc84-fe7d-4fc9-8f96-d360b6356660\" (UID: \"abbbfc84-fe7d-4fc9-8f96-d360b6356660\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.751242 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wvcx\" (UniqueName: \"kubernetes.io/projected/7100f5fe-cf67-4a79-b69f-c2ccf91d9426-kube-api-access-4wvcx\") pod \"7100f5fe-cf67-4a79-b69f-c2ccf91d9426\" (UID: \"7100f5fe-cf67-4a79-b69f-c2ccf91d9426\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.752341 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea36ec6b-4135-4ead-9535-8d54c658c5c3-operator-scripts\") pod \"ea36ec6b-4135-4ead-9535-8d54c658c5c3\" (UID: \"ea36ec6b-4135-4ead-9535-8d54c658c5c3\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.752396 4728 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-k5g8r\" (UniqueName: \"kubernetes.io/projected/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-kube-api-access-k5g8r\") pod \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.752425 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxxh7\" (UniqueName: \"kubernetes.io/projected/ea36ec6b-4135-4ead-9535-8d54c658c5c3-kube-api-access-wxxh7\") pod \"ea36ec6b-4135-4ead-9535-8d54c658c5c3\" (UID: \"ea36ec6b-4135-4ead-9535-8d54c658c5c3\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.752549 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-ovsdbserver-sb\") pod \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.752608 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-config\") pod \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.752751 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abbbfc84-fe7d-4fc9-8f96-d360b6356660-operator-scripts\") pod \"abbbfc84-fe7d-4fc9-8f96-d360b6356660\" (UID: \"abbbfc84-fe7d-4fc9-8f96-d360b6356660\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.752906 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85f6bc27-1d0e-48ad-9a23-60062c5f8bdb-operator-scripts\") pod \"85f6bc27-1d0e-48ad-9a23-60062c5f8bdb\" 
(UID: \"85f6bc27-1d0e-48ad-9a23-60062c5f8bdb\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.752929 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea36ec6b-4135-4ead-9535-8d54c658c5c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea36ec6b-4135-4ead-9535-8d54c658c5c3" (UID: "ea36ec6b-4135-4ead-9535-8d54c658c5c3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.753061 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7100f5fe-cf67-4a79-b69f-c2ccf91d9426-operator-scripts\") pod \"7100f5fe-cf67-4a79-b69f-c2ccf91d9426\" (UID: \"7100f5fe-cf67-4a79-b69f-c2ccf91d9426\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.753125 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-dns-swift-storage-0\") pod \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.753217 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-dns-svc\") pod \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\" (UID: \"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92\") " Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.753788 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85f6bc27-1d0e-48ad-9a23-60062c5f8bdb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85f6bc27-1d0e-48ad-9a23-60062c5f8bdb" (UID: "85f6bc27-1d0e-48ad-9a23-60062c5f8bdb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.753976 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7100f5fe-cf67-4a79-b69f-c2ccf91d9426-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7100f5fe-cf67-4a79-b69f-c2ccf91d9426" (UID: "7100f5fe-cf67-4a79-b69f-c2ccf91d9426"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.754194 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abbbfc84-fe7d-4fc9-8f96-d360b6356660-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "abbbfc84-fe7d-4fc9-8f96-d360b6356660" (UID: "abbbfc84-fe7d-4fc9-8f96-d360b6356660"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.754517 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea36ec6b-4135-4ead-9535-8d54c658c5c3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.754964 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnxlq\" (UniqueName: \"kubernetes.io/projected/b856c7d2-6928-4bd0-b327-2abd6d6f664f-kube-api-access-wnxlq\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.755042 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.755102 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/abbbfc84-fe7d-4fc9-8f96-d360b6356660-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.755688 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85f6bc27-1d0e-48ad-9a23-60062c5f8bdb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.755769 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6lkz\" (UniqueName: \"kubernetes.io/projected/8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd-kube-api-access-t6lkz\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.756316 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7100f5fe-cf67-4a79-b69f-c2ccf91d9426-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.756278 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85f6bc27-1d0e-48ad-9a23-60062c5f8bdb-kube-api-access-4b8pm" (OuterVolumeSpecName: "kube-api-access-4b8pm") pod "85f6bc27-1d0e-48ad-9a23-60062c5f8bdb" (UID: "85f6bc27-1d0e-48ad-9a23-60062c5f8bdb"). InnerVolumeSpecName "kube-api-access-4b8pm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.757222 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abbbfc84-fe7d-4fc9-8f96-d360b6356660-kube-api-access-btvh7" (OuterVolumeSpecName: "kube-api-access-btvh7") pod "abbbfc84-fe7d-4fc9-8f96-d360b6356660" (UID: "abbbfc84-fe7d-4fc9-8f96-d360b6356660"). InnerVolumeSpecName "kube-api-access-btvh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.763864 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7100f5fe-cf67-4a79-b69f-c2ccf91d9426-kube-api-access-4wvcx" (OuterVolumeSpecName: "kube-api-access-4wvcx") pod "7100f5fe-cf67-4a79-b69f-c2ccf91d9426" (UID: "7100f5fe-cf67-4a79-b69f-c2ccf91d9426"). InnerVolumeSpecName "kube-api-access-4wvcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.763982 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea36ec6b-4135-4ead-9535-8d54c658c5c3-kube-api-access-wxxh7" (OuterVolumeSpecName: "kube-api-access-wxxh7") pod "ea36ec6b-4135-4ead-9535-8d54c658c5c3" (UID: "ea36ec6b-4135-4ead-9535-8d54c658c5c3"). InnerVolumeSpecName "kube-api-access-wxxh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.770129 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-kube-api-access-k5g8r" (OuterVolumeSpecName: "kube-api-access-k5g8r") pod "6bf98b91-deff-4ed7-b4e9-ec72db4d9d92" (UID: "6bf98b91-deff-4ed7-b4e9-ec72db4d9d92"). InnerVolumeSpecName "kube-api-access-k5g8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.812414 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6bf98b91-deff-4ed7-b4e9-ec72db4d9d92" (UID: "6bf98b91-deff-4ed7-b4e9-ec72db4d9d92"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.814943 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6bf98b91-deff-4ed7-b4e9-ec72db4d9d92" (UID: "6bf98b91-deff-4ed7-b4e9-ec72db4d9d92"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.820947 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-config" (OuterVolumeSpecName: "config") pod "6bf98b91-deff-4ed7-b4e9-ec72db4d9d92" (UID: "6bf98b91-deff-4ed7-b4e9-ec72db4d9d92"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.822182 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6bf98b91-deff-4ed7-b4e9-ec72db4d9d92" (UID: "6bf98b91-deff-4ed7-b4e9-ec72db4d9d92"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.823171 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6bf98b91-deff-4ed7-b4e9-ec72db4d9d92" (UID: "6bf98b91-deff-4ed7-b4e9-ec72db4d9d92"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.858200 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.858249 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.858264 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.858279 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b8pm\" (UniqueName: \"kubernetes.io/projected/85f6bc27-1d0e-48ad-9a23-60062c5f8bdb-kube-api-access-4b8pm\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.858294 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btvh7\" (UniqueName: \"kubernetes.io/projected/abbbfc84-fe7d-4fc9-8f96-d360b6356660-kube-api-access-btvh7\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.858306 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wvcx\" (UniqueName: \"kubernetes.io/projected/7100f5fe-cf67-4a79-b69f-c2ccf91d9426-kube-api-access-4wvcx\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.858318 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5g8r\" (UniqueName: \"kubernetes.io/projected/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-kube-api-access-k5g8r\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 
crc kubenswrapper[4728]: I0227 10:48:19.858330 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxxh7\" (UniqueName: \"kubernetes.io/projected/ea36ec6b-4135-4ead-9535-8d54c658c5c3-kube-api-access-wxxh7\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.858343 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.858354 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.966828 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qqckp" event={"ID":"f9136d15-a48d-43e3-aff8-ca09ca1f222b","Type":"ContainerDied","Data":"03d5e197ff86696fa6d30217a9533817a9743b80aebd37f7cccfed92d6679431"} Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.966877 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03d5e197ff86696fa6d30217a9533817a9743b80aebd37f7cccfed92d6679431" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.966935 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-qqckp" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.975623 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-6bca-account-create-update-bcjb2" event={"ID":"8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd","Type":"ContainerDied","Data":"1ed38f7030e6d4cbea31f6daacb51d924eb7f2ce569c297ad6aeb8564bf60313"} Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.975655 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ed38f7030e6d4cbea31f6daacb51d924eb7f2ce569c297ad6aeb8564bf60313" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.975704 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-6bca-account-create-update-bcjb2" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.980152 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" event={"ID":"6bf98b91-deff-4ed7-b4e9-ec72db4d9d92","Type":"ContainerDied","Data":"34908154719a607500d550ac79b69ee4e6c563cdc5dcf3e44e143cb43c6f6ae1"} Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.980299 4728 scope.go:117] "RemoveContainer" containerID="02ed52082e50a91686c703b06fb559e5c0dee80506d11e273846680ad2c0a233" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.980543 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-sm4dz" Feb 27 10:48:19 crc kubenswrapper[4728]: I0227 10:48:19.994948 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bc9dk" event={"ID":"d31f26e9-dded-4375-abb8-f038bce13899","Type":"ContainerStarted","Data":"d7b6eec4e9f9cca6b7772e37722b6730767a340092347fd394452c1526e98955"} Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.001428 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4rkxc" event={"ID":"7100f5fe-cf67-4a79-b69f-c2ccf91d9426","Type":"ContainerDied","Data":"9d3f0e3c40878353230e829c3b48186910eae7a77fd0a8d02ade2509f52abf6d"} Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.001486 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4rkxc" Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.001601 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d3f0e3c40878353230e829c3b48186910eae7a77fd0a8d02ade2509f52abf6d" Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.005716 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-151d-account-create-update-dsl49" Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.006222 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-151d-account-create-update-dsl49" event={"ID":"abbbfc84-fe7d-4fc9-8f96-d360b6356660","Type":"ContainerDied","Data":"3f2d62d89760bd5119b8fe7c6f2d7b68058fb598b0b67614dd23d2036c73bed5"} Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.006377 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f2d62d89760bd5119b8fe7c6f2d7b68058fb598b0b67614dd23d2036c73bed5" Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.009123 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-31b7-account-create-update-gwfvg" event={"ID":"b856c7d2-6928-4bd0-b327-2abd6d6f664f","Type":"ContainerDied","Data":"07cf9fd2a18d77b3cbb2d6342d8c3ea4e18e815b38ce23ee3984ce1885adf4ea"} Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.009166 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07cf9fd2a18d77b3cbb2d6342d8c3ea4e18e815b38ce23ee3984ce1885adf4ea" Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.009344 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-31b7-account-create-update-gwfvg" Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.017038 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-lb6tb" Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.017132 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-lb6tb" event={"ID":"7160eba2-79dd-42c9-8540-30f948234052","Type":"ContainerDied","Data":"f21837ee8710b1267ea97fef6b11e658ae640631bbbb50e7f721a9345c8b82c3"} Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.017158 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f21837ee8710b1267ea97fef6b11e658ae640631bbbb50e7f721a9345c8b82c3" Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.024983 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-52njv" event={"ID":"85f6bc27-1d0e-48ad-9a23-60062c5f8bdb","Type":"ContainerDied","Data":"d371829dce21b5b1d816128472eb0efe1a7f9e56265c0b4912bd94af5f161c19"} Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.025018 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d371829dce21b5b1d816128472eb0efe1a7f9e56265c0b4912bd94af5f161c19" Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.025071 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-52njv" Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.026038 4728 scope.go:117] "RemoveContainer" containerID="45e022385d294c164127ce63fff1552108326576682438f4f40428626fcc4346" Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.032341 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-332e-account-create-update-mt6gn" event={"ID":"ea36ec6b-4135-4ead-9535-8d54c658c5c3","Type":"ContainerDied","Data":"22de3a186b9ba44d39c30656089de69cbbd6e4a389431b14dcb14711c674a848"} Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.032388 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22de3a186b9ba44d39c30656089de69cbbd6e4a389431b14dcb14711c674a848" Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.032431 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-332e-account-create-update-mt6gn" Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.042348 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-bc9dk" podStartSLOduration=2.7513654560000003 podStartE2EDuration="8.042317164s" podCreationTimestamp="2026-02-27 10:48:12 +0000 UTC" firstStartedPulling="2026-02-27 10:48:13.957529904 +0000 UTC m=+1313.919896010" lastFinishedPulling="2026-02-27 10:48:19.248481602 +0000 UTC m=+1319.210847718" observedRunningTime="2026-02-27 10:48:20.026569771 +0000 UTC m=+1319.988935897" watchObservedRunningTime="2026-02-27 10:48:20.042317164 +0000 UTC m=+1320.004683280" Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.070861 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-sm4dz"] Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 10:48:20.096634 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-sm4dz"] Feb 27 10:48:20 crc kubenswrapper[4728]: I0227 
10:48:20.742670 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf98b91-deff-4ed7-b4e9-ec72db4d9d92" path="/var/lib/kubelet/pods/6bf98b91-deff-4ed7-b4e9-ec72db4d9d92/volumes" Feb 27 10:48:24 crc kubenswrapper[4728]: I0227 10:48:24.072285 4728 generic.go:334] "Generic (PLEG): container finished" podID="d31f26e9-dded-4375-abb8-f038bce13899" containerID="d7b6eec4e9f9cca6b7772e37722b6730767a340092347fd394452c1526e98955" exitCode=0 Feb 27 10:48:24 crc kubenswrapper[4728]: I0227 10:48:24.072329 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bc9dk" event={"ID":"d31f26e9-dded-4375-abb8-f038bce13899","Type":"ContainerDied","Data":"d7b6eec4e9f9cca6b7772e37722b6730767a340092347fd394452c1526e98955"} Feb 27 10:48:25 crc kubenswrapper[4728]: I0227 10:48:25.549783 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bc9dk" Feb 27 10:48:25 crc kubenswrapper[4728]: I0227 10:48:25.582707 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dksn2\" (UniqueName: \"kubernetes.io/projected/d31f26e9-dded-4375-abb8-f038bce13899-kube-api-access-dksn2\") pod \"d31f26e9-dded-4375-abb8-f038bce13899\" (UID: \"d31f26e9-dded-4375-abb8-f038bce13899\") " Feb 27 10:48:25 crc kubenswrapper[4728]: I0227 10:48:25.583002 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d31f26e9-dded-4375-abb8-f038bce13899-combined-ca-bundle\") pod \"d31f26e9-dded-4375-abb8-f038bce13899\" (UID: \"d31f26e9-dded-4375-abb8-f038bce13899\") " Feb 27 10:48:25 crc kubenswrapper[4728]: I0227 10:48:25.583032 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d31f26e9-dded-4375-abb8-f038bce13899-config-data\") pod \"d31f26e9-dded-4375-abb8-f038bce13899\" (UID: 
\"d31f26e9-dded-4375-abb8-f038bce13899\") " Feb 27 10:48:25 crc kubenswrapper[4728]: I0227 10:48:25.592683 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d31f26e9-dded-4375-abb8-f038bce13899-kube-api-access-dksn2" (OuterVolumeSpecName: "kube-api-access-dksn2") pod "d31f26e9-dded-4375-abb8-f038bce13899" (UID: "d31f26e9-dded-4375-abb8-f038bce13899"). InnerVolumeSpecName "kube-api-access-dksn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:48:25 crc kubenswrapper[4728]: I0227 10:48:25.611255 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d31f26e9-dded-4375-abb8-f038bce13899-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d31f26e9-dded-4375-abb8-f038bce13899" (UID: "d31f26e9-dded-4375-abb8-f038bce13899"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:25 crc kubenswrapper[4728]: I0227 10:48:25.649482 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d31f26e9-dded-4375-abb8-f038bce13899-config-data" (OuterVolumeSpecName: "config-data") pod "d31f26e9-dded-4375-abb8-f038bce13899" (UID: "d31f26e9-dded-4375-abb8-f038bce13899"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:25 crc kubenswrapper[4728]: I0227 10:48:25.685688 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d31f26e9-dded-4375-abb8-f038bce13899-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:25 crc kubenswrapper[4728]: I0227 10:48:25.685727 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d31f26e9-dded-4375-abb8-f038bce13899-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:25 crc kubenswrapper[4728]: I0227 10:48:25.685739 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dksn2\" (UniqueName: \"kubernetes.io/projected/d31f26e9-dded-4375-abb8-f038bce13899-kube-api-access-dksn2\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.096930 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bc9dk" event={"ID":"d31f26e9-dded-4375-abb8-f038bce13899","Type":"ContainerDied","Data":"20dad8f26658d56489992fb684c21b4929c2fe75122ac7bd2b21399777837780"} Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.096987 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-bc9dk" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.096993 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20dad8f26658d56489992fb684c21b4929c2fe75122ac7bd2b21399777837780" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.471878 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lt869"] Feb 27 10:48:26 crc kubenswrapper[4728]: E0227 10:48:26.474202 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d31f26e9-dded-4375-abb8-f038bce13899" containerName="keystone-db-sync" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.474987 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d31f26e9-dded-4375-abb8-f038bce13899" containerName="keystone-db-sync" Feb 27 10:48:26 crc kubenswrapper[4728]: E0227 10:48:26.475109 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9136d15-a48d-43e3-aff8-ca09ca1f222b" containerName="mariadb-database-create" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.475175 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9136d15-a48d-43e3-aff8-ca09ca1f222b" containerName="mariadb-database-create" Feb 27 10:48:26 crc kubenswrapper[4728]: E0227 10:48:26.475254 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7160eba2-79dd-42c9-8540-30f948234052" containerName="mariadb-database-create" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.475303 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7160eba2-79dd-42c9-8540-30f948234052" containerName="mariadb-database-create" Feb 27 10:48:26 crc kubenswrapper[4728]: E0227 10:48:26.475369 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7100f5fe-cf67-4a79-b69f-c2ccf91d9426" containerName="mariadb-database-create" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.475424 4728 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7100f5fe-cf67-4a79-b69f-c2ccf91d9426" containerName="mariadb-database-create" Feb 27 10:48:26 crc kubenswrapper[4728]: E0227 10:48:26.475518 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf98b91-deff-4ed7-b4e9-ec72db4d9d92" containerName="init" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.475579 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf98b91-deff-4ed7-b4e9-ec72db4d9d92" containerName="init" Feb 27 10:48:26 crc kubenswrapper[4728]: E0227 10:48:26.475650 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea36ec6b-4135-4ead-9535-8d54c658c5c3" containerName="mariadb-account-create-update" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.475705 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea36ec6b-4135-4ead-9535-8d54c658c5c3" containerName="mariadb-account-create-update" Feb 27 10:48:26 crc kubenswrapper[4728]: E0227 10:48:26.475780 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd" containerName="mariadb-account-create-update" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.475832 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd" containerName="mariadb-account-create-update" Feb 27 10:48:26 crc kubenswrapper[4728]: E0227 10:48:26.475899 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf98b91-deff-4ed7-b4e9-ec72db4d9d92" containerName="dnsmasq-dns" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.475951 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf98b91-deff-4ed7-b4e9-ec72db4d9d92" containerName="dnsmasq-dns" Feb 27 10:48:26 crc kubenswrapper[4728]: E0227 10:48:26.476021 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b856c7d2-6928-4bd0-b327-2abd6d6f664f" containerName="mariadb-account-create-update" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.476087 4728 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b856c7d2-6928-4bd0-b327-2abd6d6f664f" containerName="mariadb-account-create-update" Feb 27 10:48:26 crc kubenswrapper[4728]: E0227 10:48:26.476172 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abbbfc84-fe7d-4fc9-8f96-d360b6356660" containerName="mariadb-account-create-update" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.476222 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="abbbfc84-fe7d-4fc9-8f96-d360b6356660" containerName="mariadb-account-create-update" Feb 27 10:48:26 crc kubenswrapper[4728]: E0227 10:48:26.476277 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f6bc27-1d0e-48ad-9a23-60062c5f8bdb" containerName="mariadb-database-create" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.476330 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f6bc27-1d0e-48ad-9a23-60062c5f8bdb" containerName="mariadb-database-create" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.476931 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f6bc27-1d0e-48ad-9a23-60062c5f8bdb" containerName="mariadb-database-create" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.477020 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="d31f26e9-dded-4375-abb8-f038bce13899" containerName="keystone-db-sync" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.477096 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea36ec6b-4135-4ead-9535-8d54c658c5c3" containerName="mariadb-account-create-update" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.477165 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd" containerName="mariadb-account-create-update" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.477231 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7160eba2-79dd-42c9-8540-30f948234052" 
containerName="mariadb-database-create" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.477305 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7100f5fe-cf67-4a79-b69f-c2ccf91d9426" containerName="mariadb-database-create" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.477378 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b856c7d2-6928-4bd0-b327-2abd6d6f664f" containerName="mariadb-account-create-update" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.477455 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bf98b91-deff-4ed7-b4e9-ec72db4d9d92" containerName="dnsmasq-dns" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.478237 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9136d15-a48d-43e3-aff8-ca09ca1f222b" containerName="mariadb-database-create" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.478322 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="abbbfc84-fe7d-4fc9-8f96-d360b6356660" containerName="mariadb-account-create-update" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.479539 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.488369 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.488601 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2jf8f" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.489348 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.489566 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.489799 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.504520 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-fernet-keys\") pod \"keystone-bootstrap-lt869\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.504597 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-combined-ca-bundle\") pod \"keystone-bootstrap-lt869\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.504650 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-config-data\") pod \"keystone-bootstrap-lt869\" (UID: 
\"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.504721 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-credential-keys\") pod \"keystone-bootstrap-lt869\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.504745 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-scripts\") pod \"keystone-bootstrap-lt869\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.504761 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqtrs\" (UniqueName: \"kubernetes.io/projected/bcd8863f-4937-4e77-80b9-0bfbf852fcac-kube-api-access-wqtrs\") pod \"keystone-bootstrap-lt869\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.547978 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-4w8s7"] Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.550120 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.580560 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lt869"] Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.610901 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-4w8s7\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.610957 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsmpc\" (UniqueName: \"kubernetes.io/projected/484474dd-d534-4903-99fb-afb3f53aa146-kube-api-access-hsmpc\") pod \"dnsmasq-dns-6c9c9f998c-4w8s7\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.610978 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-4w8s7\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.611000 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-config-data\") pod \"keystone-bootstrap-lt869\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.611028 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-4w8s7\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.611086 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-credential-keys\") pod \"keystone-bootstrap-lt869\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.611110 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-scripts\") pod \"keystone-bootstrap-lt869\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.611127 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqtrs\" (UniqueName: \"kubernetes.io/projected/bcd8863f-4937-4e77-80b9-0bfbf852fcac-kube-api-access-wqtrs\") pod \"keystone-bootstrap-lt869\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.611166 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-4w8s7\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.611205 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-config\") pod \"dnsmasq-dns-6c9c9f998c-4w8s7\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.611239 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-fernet-keys\") pod \"keystone-bootstrap-lt869\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.611272 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-combined-ca-bundle\") pod \"keystone-bootstrap-lt869\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.622561 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-4w8s7"] Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.624513 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-fernet-keys\") pod \"keystone-bootstrap-lt869\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.627603 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-credential-keys\") pod \"keystone-bootstrap-lt869\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.628849 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-scripts\") pod \"keystone-bootstrap-lt869\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.629405 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-config-data\") pod \"keystone-bootstrap-lt869\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.644770 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-combined-ca-bundle\") pod \"keystone-bootstrap-lt869\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.647828 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqtrs\" (UniqueName: \"kubernetes.io/projected/bcd8863f-4937-4e77-80b9-0bfbf852fcac-kube-api-access-wqtrs\") pod \"keystone-bootstrap-lt869\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.706286 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-dgjpm"] Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.707638 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-dgjpm" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.717517 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-s58fl" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.717701 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.718300 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-4w8s7\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.718376 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsmpc\" (UniqueName: \"kubernetes.io/projected/484474dd-d534-4903-99fb-afb3f53aa146-kube-api-access-hsmpc\") pod \"dnsmasq-dns-6c9c9f998c-4w8s7\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.718400 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-4w8s7\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.718431 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-4w8s7\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc 
kubenswrapper[4728]: I0227 10:48:26.718543 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-4w8s7\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.718589 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-config\") pod \"dnsmasq-dns-6c9c9f998c-4w8s7\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.719547 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-config\") pod \"dnsmasq-dns-6c9c9f998c-4w8s7\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.720155 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-4w8s7\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.721117 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-4w8s7\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.721712 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-4w8s7\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.722288 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-4w8s7\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.746444 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-dgjpm"] Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.774730 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsmpc\" (UniqueName: \"kubernetes.io/projected/484474dd-d534-4903-99fb-afb3f53aa146-kube-api-access-hsmpc\") pod \"dnsmasq-dns-6c9c9f998c-4w8s7\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.777065 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-dds8x"] Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.778640 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.787281 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-lfz4z" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.787472 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.787702 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.800915 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dds8x"] Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.822937 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-config-data\") pod \"cinder-db-sync-dds8x\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.822995 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8gs7\" (UniqueName: \"kubernetes.io/projected/e977ffad-2764-4871-bdc8-24f0c3b4caf1-kube-api-access-b8gs7\") pod \"heat-db-sync-dgjpm\" (UID: \"e977ffad-2764-4871-bdc8-24f0c3b4caf1\") " pod="openstack/heat-db-sync-dgjpm" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.823016 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e977ffad-2764-4871-bdc8-24f0c3b4caf1-combined-ca-bundle\") pod \"heat-db-sync-dgjpm\" (UID: \"e977ffad-2764-4871-bdc8-24f0c3b4caf1\") " pod="openstack/heat-db-sync-dgjpm" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.823041 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm7br\" (UniqueName: \"kubernetes.io/projected/e293ec90-6006-49f0-8e24-8a0f4327d2cf-kube-api-access-sm7br\") pod \"cinder-db-sync-dds8x\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.823058 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e977ffad-2764-4871-bdc8-24f0c3b4caf1-config-data\") pod \"heat-db-sync-dgjpm\" (UID: \"e977ffad-2764-4871-bdc8-24f0c3b4caf1\") " pod="openstack/heat-db-sync-dgjpm" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.823111 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-db-sync-config-data\") pod \"cinder-db-sync-dds8x\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.823127 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e293ec90-6006-49f0-8e24-8a0f4327d2cf-etc-machine-id\") pod \"cinder-db-sync-dds8x\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.823158 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-scripts\") pod \"cinder-db-sync-dds8x\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.823231 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-combined-ca-bundle\") pod \"cinder-db-sync-dds8x\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.831362 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.845836 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4msxf"] Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.847586 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4msxf" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.863122 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.864309 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-w6w2k" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.864317 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.900962 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.931841 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56z9r\" (UniqueName: \"kubernetes.io/projected/085a2ebe-2930-448c-aec9-396b505fb399-kube-api-access-56z9r\") pod \"neutron-db-sync-4msxf\" (UID: \"085a2ebe-2930-448c-aec9-396b505fb399\") " pod="openstack/neutron-db-sync-4msxf" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.931907 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-db-sync-config-data\") pod \"cinder-db-sync-dds8x\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.931931 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e293ec90-6006-49f0-8e24-8a0f4327d2cf-etc-machine-id\") pod \"cinder-db-sync-dds8x\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.931961 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-scripts\") pod \"cinder-db-sync-dds8x\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.932021 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085a2ebe-2930-448c-aec9-396b505fb399-combined-ca-bundle\") pod \"neutron-db-sync-4msxf\" (UID: \"085a2ebe-2930-448c-aec9-396b505fb399\") " pod="openstack/neutron-db-sync-4msxf" Feb 27 
10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.932107 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-combined-ca-bundle\") pod \"cinder-db-sync-dds8x\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.932222 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-config-data\") pod \"cinder-db-sync-dds8x\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.932279 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8gs7\" (UniqueName: \"kubernetes.io/projected/e977ffad-2764-4871-bdc8-24f0c3b4caf1-kube-api-access-b8gs7\") pod \"heat-db-sync-dgjpm\" (UID: \"e977ffad-2764-4871-bdc8-24f0c3b4caf1\") " pod="openstack/heat-db-sync-dgjpm" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.932295 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e977ffad-2764-4871-bdc8-24f0c3b4caf1-combined-ca-bundle\") pod \"heat-db-sync-dgjpm\" (UID: \"e977ffad-2764-4871-bdc8-24f0c3b4caf1\") " pod="openstack/heat-db-sync-dgjpm" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.932323 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm7br\" (UniqueName: \"kubernetes.io/projected/e293ec90-6006-49f0-8e24-8a0f4327d2cf-kube-api-access-sm7br\") pod \"cinder-db-sync-dds8x\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.932340 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e977ffad-2764-4871-bdc8-24f0c3b4caf1-config-data\") pod \"heat-db-sync-dgjpm\" (UID: \"e977ffad-2764-4871-bdc8-24f0c3b4caf1\") " pod="openstack/heat-db-sync-dgjpm" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.932392 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/085a2ebe-2930-448c-aec9-396b505fb399-config\") pod \"neutron-db-sync-4msxf\" (UID: \"085a2ebe-2930-448c-aec9-396b505fb399\") " pod="openstack/neutron-db-sync-4msxf" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.936996 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e293ec90-6006-49f0-8e24-8a0f4327d2cf-etc-machine-id\") pod \"cinder-db-sync-dds8x\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.946520 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e977ffad-2764-4871-bdc8-24f0c3b4caf1-combined-ca-bundle\") pod \"heat-db-sync-dgjpm\" (UID: \"e977ffad-2764-4871-bdc8-24f0c3b4caf1\") " pod="openstack/heat-db-sync-dgjpm" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.957599 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-config-data\") pod \"cinder-db-sync-dds8x\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.958006 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-scripts\") pod \"cinder-db-sync-dds8x\" (UID: 
\"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.960537 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-db-sync-config-data\") pod \"cinder-db-sync-dds8x\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.960843 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e977ffad-2764-4871-bdc8-24f0c3b4caf1-config-data\") pod \"heat-db-sync-dgjpm\" (UID: \"e977ffad-2764-4871-bdc8-24f0c3b4caf1\") " pod="openstack/heat-db-sync-dgjpm" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.971217 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-combined-ca-bundle\") pod \"cinder-db-sync-dds8x\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:26 crc kubenswrapper[4728]: I0227 10:48:26.974886 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4msxf"] Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.004789 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8gs7\" (UniqueName: \"kubernetes.io/projected/e977ffad-2764-4871-bdc8-24f0c3b4caf1-kube-api-access-b8gs7\") pod \"heat-db-sync-dgjpm\" (UID: \"e977ffad-2764-4871-bdc8-24f0c3b4caf1\") " pod="openstack/heat-db-sync-dgjpm" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.005485 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm7br\" (UniqueName: \"kubernetes.io/projected/e293ec90-6006-49f0-8e24-8a0f4327d2cf-kube-api-access-sm7br\") pod \"cinder-db-sync-dds8x\" 
(UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.037252 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085a2ebe-2930-448c-aec9-396b505fb399-combined-ca-bundle\") pod \"neutron-db-sync-4msxf\" (UID: \"085a2ebe-2930-448c-aec9-396b505fb399\") " pod="openstack/neutron-db-sync-4msxf" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.037440 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/085a2ebe-2930-448c-aec9-396b505fb399-config\") pod \"neutron-db-sync-4msxf\" (UID: \"085a2ebe-2930-448c-aec9-396b505fb399\") " pod="openstack/neutron-db-sync-4msxf" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.037470 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56z9r\" (UniqueName: \"kubernetes.io/projected/085a2ebe-2930-448c-aec9-396b505fb399-kube-api-access-56z9r\") pod \"neutron-db-sync-4msxf\" (UID: \"085a2ebe-2930-448c-aec9-396b505fb399\") " pod="openstack/neutron-db-sync-4msxf" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.051202 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/085a2ebe-2930-448c-aec9-396b505fb399-config\") pod \"neutron-db-sync-4msxf\" (UID: \"085a2ebe-2930-448c-aec9-396b505fb399\") " pod="openstack/neutron-db-sync-4msxf" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.064322 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-dgjpm" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.064417 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085a2ebe-2930-448c-aec9-396b505fb399-combined-ca-bundle\") pod \"neutron-db-sync-4msxf\" (UID: \"085a2ebe-2930-448c-aec9-396b505fb399\") " pod="openstack/neutron-db-sync-4msxf" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.085754 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-4w8s7"] Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.106914 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-tmx8h"] Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.108969 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tmx8h" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.140154 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141bf253-61a8-46a0-9d10-9aefbbd124c6-config-data\") pod \"placement-db-sync-tmx8h\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " pod="openstack/placement-db-sync-tmx8h" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.140249 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141bf253-61a8-46a0-9d10-9aefbbd124c6-scripts\") pod \"placement-db-sync-tmx8h\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " pod="openstack/placement-db-sync-tmx8h" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.140285 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141bf253-61a8-46a0-9d10-9aefbbd124c6-logs\") pod \"placement-db-sync-tmx8h\" (UID: 
\"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " pod="openstack/placement-db-sync-tmx8h" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.140444 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141bf253-61a8-46a0-9d10-9aefbbd124c6-combined-ca-bundle\") pod \"placement-db-sync-tmx8h\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " pod="openstack/placement-db-sync-tmx8h" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.140489 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg9b5\" (UniqueName: \"kubernetes.io/projected/141bf253-61a8-46a0-9d10-9aefbbd124c6-kube-api-access-rg9b5\") pod \"placement-db-sync-tmx8h\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " pod="openstack/placement-db-sync-tmx8h" Feb 27 10:48:27 crc kubenswrapper[4728]: W0227 10:48:27.141376 4728 reflector.go:561] object-"openstack"/"placement-placement-dockercfg-9jk9w": failed to list *v1.Secret: secrets "placement-placement-dockercfg-9jk9w" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Feb 27 10:48:27 crc kubenswrapper[4728]: E0227 10:48:27.141416 4728 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"placement-placement-dockercfg-9jk9w\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"placement-placement-dockercfg-9jk9w\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.141693 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.141890 4728 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.165927 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dds8x" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.186581 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tmx8h"] Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.189626 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56z9r\" (UniqueName: \"kubernetes.io/projected/085a2ebe-2930-448c-aec9-396b505fb399-kube-api-access-56z9r\") pod \"neutron-db-sync-4msxf\" (UID: \"085a2ebe-2930-448c-aec9-396b505fb399\") " pod="openstack/neutron-db-sync-4msxf" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.240583 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-v2vbf"] Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.242749 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-v2vbf" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.243661 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg9b5\" (UniqueName: \"kubernetes.io/projected/141bf253-61a8-46a0-9d10-9aefbbd124c6-kube-api-access-rg9b5\") pod \"placement-db-sync-tmx8h\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " pod="openstack/placement-db-sync-tmx8h" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.243735 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141bf253-61a8-46a0-9d10-9aefbbd124c6-config-data\") pod \"placement-db-sync-tmx8h\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " pod="openstack/placement-db-sync-tmx8h" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.243776 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141bf253-61a8-46a0-9d10-9aefbbd124c6-scripts\") pod \"placement-db-sync-tmx8h\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " pod="openstack/placement-db-sync-tmx8h" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.243798 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141bf253-61a8-46a0-9d10-9aefbbd124c6-logs\") pod \"placement-db-sync-tmx8h\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " pod="openstack/placement-db-sync-tmx8h" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.243901 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141bf253-61a8-46a0-9d10-9aefbbd124c6-combined-ca-bundle\") pod \"placement-db-sync-tmx8h\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " pod="openstack/placement-db-sync-tmx8h" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.246478 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141bf253-61a8-46a0-9d10-9aefbbd124c6-logs\") pod \"placement-db-sync-tmx8h\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " pod="openstack/placement-db-sync-tmx8h" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.249048 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-z6m4k" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.249333 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.255869 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141bf253-61a8-46a0-9d10-9aefbbd124c6-scripts\") pod \"placement-db-sync-tmx8h\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " pod="openstack/placement-db-sync-tmx8h" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.269114 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141bf253-61a8-46a0-9d10-9aefbbd124c6-combined-ca-bundle\") pod \"placement-db-sync-tmx8h\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " pod="openstack/placement-db-sync-tmx8h" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.273570 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141bf253-61a8-46a0-9d10-9aefbbd124c6-config-data\") pod \"placement-db-sync-tmx8h\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " pod="openstack/placement-db-sync-tmx8h" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.278833 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg9b5\" (UniqueName: \"kubernetes.io/projected/141bf253-61a8-46a0-9d10-9aefbbd124c6-kube-api-access-rg9b5\") pod 
\"placement-db-sync-tmx8h\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " pod="openstack/placement-db-sync-tmx8h" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.339018 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4msxf" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.343563 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-v2vbf"] Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.357435 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7cb2\" (UniqueName: \"kubernetes.io/projected/f69c278c-545b-4a40-9f34-53d895c528c0-kube-api-access-x7cb2\") pod \"barbican-db-sync-v2vbf\" (UID: \"f69c278c-545b-4a40-9f34-53d895c528c0\") " pod="openstack/barbican-db-sync-v2vbf" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.357497 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f69c278c-545b-4a40-9f34-53d895c528c0-db-sync-config-data\") pod \"barbican-db-sync-v2vbf\" (UID: \"f69c278c-545b-4a40-9f34-53d895c528c0\") " pod="openstack/barbican-db-sync-v2vbf" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.357559 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69c278c-545b-4a40-9f34-53d895c528c0-combined-ca-bundle\") pod \"barbican-db-sync-v2vbf\" (UID: \"f69c278c-545b-4a40-9f34-53d895c528c0\") " pod="openstack/barbican-db-sync-v2vbf" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.375550 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-947zv"] Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.377189 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.422958 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-947zv"] Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.459620 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-config\") pod \"dnsmasq-dns-57c957c4ff-947zv\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.459678 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-947zv\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.459717 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-947zv\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.459742 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpt8l\" (UniqueName: \"kubernetes.io/projected/4028a7cf-4a8a-4901-96f2-ee5577db5592-kube-api-access-lpt8l\") pod \"dnsmasq-dns-57c957c4ff-947zv\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.459777 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-947zv\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.459873 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7cb2\" (UniqueName: \"kubernetes.io/projected/f69c278c-545b-4a40-9f34-53d895c528c0-kube-api-access-x7cb2\") pod \"barbican-db-sync-v2vbf\" (UID: \"f69c278c-545b-4a40-9f34-53d895c528c0\") " pod="openstack/barbican-db-sync-v2vbf" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.459892 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-947zv\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.459912 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f69c278c-545b-4a40-9f34-53d895c528c0-db-sync-config-data\") pod \"barbican-db-sync-v2vbf\" (UID: \"f69c278c-545b-4a40-9f34-53d895c528c0\") " pod="openstack/barbican-db-sync-v2vbf" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.459937 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69c278c-545b-4a40-9f34-53d895c528c0-combined-ca-bundle\") pod \"barbican-db-sync-v2vbf\" (UID: \"f69c278c-545b-4a40-9f34-53d895c528c0\") " pod="openstack/barbican-db-sync-v2vbf" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.463643 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f69c278c-545b-4a40-9f34-53d895c528c0-combined-ca-bundle\") pod \"barbican-db-sync-v2vbf\" (UID: \"f69c278c-545b-4a40-9f34-53d895c528c0\") " pod="openstack/barbican-db-sync-v2vbf" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.474345 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f69c278c-545b-4a40-9f34-53d895c528c0-db-sync-config-data\") pod \"barbican-db-sync-v2vbf\" (UID: \"f69c278c-545b-4a40-9f34-53d895c528c0\") " pod="openstack/barbican-db-sync-v2vbf" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.494578 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7cb2\" (UniqueName: \"kubernetes.io/projected/f69c278c-545b-4a40-9f34-53d895c528c0-kube-api-access-x7cb2\") pod \"barbican-db-sync-v2vbf\" (UID: \"f69c278c-545b-4a40-9f34-53d895c528c0\") " pod="openstack/barbican-db-sync-v2vbf" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.496387 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.498938 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.501715 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.501904 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.509938 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.562745 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-947zv\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.562786 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7dd98b-314c-4e6c-a45b-31168398fca3-run-httpd\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.562830 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.562861 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-config-data\") pod \"ceilometer-0\" (UID: 
\"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.562879 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-config\") pod \"dnsmasq-dns-57c957c4ff-947zv\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.562913 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-947zv\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.562936 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7dd98b-314c-4e6c-a45b-31168398fca3-log-httpd\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.562955 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.562978 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-947zv\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: 
I0227 10:48:27.563003 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2782x\" (UniqueName: \"kubernetes.io/projected/7b7dd98b-314c-4e6c-a45b-31168398fca3-kube-api-access-2782x\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.563020 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpt8l\" (UniqueName: \"kubernetes.io/projected/4028a7cf-4a8a-4901-96f2-ee5577db5592-kube-api-access-lpt8l\") pod \"dnsmasq-dns-57c957c4ff-947zv\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.563051 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-947zv\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.563070 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-scripts\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.564376 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-947zv\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.565123 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-config\") pod \"dnsmasq-dns-57c957c4ff-947zv\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.568991 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-947zv\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.569138 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-947zv\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.569414 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-947zv\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.578828 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.580617 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.584120 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.584143 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpt8l\" (UniqueName: \"kubernetes.io/projected/4028a7cf-4a8a-4901-96f2-ee5577db5592-kube-api-access-lpt8l\") pod \"dnsmasq-dns-57c957c4ff-947zv\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.584337 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.584943 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qvchz" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.591627 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.665757 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.666030 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " 
pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.666073 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zlmc\" (UniqueName: \"kubernetes.io/projected/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-kube-api-access-4zlmc\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.666173 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-logs\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.666196 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-scripts\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.666233 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7dd98b-314c-4e6c-a45b-31168398fca3-run-httpd\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.666270 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-config-data\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " 
pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.666292 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.666315 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.666346 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-config-data\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.666377 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.666415 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7dd98b-314c-4e6c-a45b-31168398fca3-log-httpd\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.666440 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.666479 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2782x\" (UniqueName: \"kubernetes.io/projected/7b7dd98b-314c-4e6c-a45b-31168398fca3-kube-api-access-2782x\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.666977 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7dd98b-314c-4e6c-a45b-31168398fca3-run-httpd\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.667262 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-scripts\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.671686 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-scripts\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.672012 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7dd98b-314c-4e6c-a45b-31168398fca3-log-httpd\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " 
pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.673379 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.675710 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-config-data\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.680345 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.684745 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.698019 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-v2vbf" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.719989 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2782x\" (UniqueName: \"kubernetes.io/projected/7b7dd98b-314c-4e6c-a45b-31168398fca3-kube-api-access-2782x\") pod \"ceilometer-0\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.744487 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.774378 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.774439 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.774484 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zlmc\" (UniqueName: \"kubernetes.io/projected/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-kube-api-access-4zlmc\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.774563 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-logs\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.774583 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.774614 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-config-data\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.774633 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.774704 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.775204 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.775488 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-logs\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 
crc kubenswrapper[4728]: I0227 10:48:27.781750 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.791381 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.791429 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d1be12f0d287d711dc64205ed4c6dc8f46de7817821a549d7df0761d45187fe2/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.794026 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zlmc\" (UniqueName: \"kubernetes.io/projected/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-kube-api-access-4zlmc\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.804563 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.808041 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-scripts\") pod \"glance-default-external-api-0\" 
(UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.808638 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-config-data\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.817206 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.820018 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.826008 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.828101 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.838863 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.839481 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.872366 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lt869"] Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.872768 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\") pod \"glance-default-external-api-0\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.881985 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-config-data\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.882047 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-scripts\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.882074 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.882117 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/062940c5-dd5d-498d-a4d0-91163540b381-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.882152 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.882191 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wx7s\" (UniqueName: \"kubernetes.io/projected/062940c5-dd5d-498d-a4d0-91163540b381-kube-api-access-8wx7s\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.882213 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/062940c5-dd5d-498d-a4d0-91163540b381-logs\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.882232 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.909473 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.986241 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wx7s\" (UniqueName: \"kubernetes.io/projected/062940c5-dd5d-498d-a4d0-91163540b381-kube-api-access-8wx7s\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.986553 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/062940c5-dd5d-498d-a4d0-91163540b381-logs\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.986581 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.986731 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.986780 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-scripts\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.986807 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.986848 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/062940c5-dd5d-498d-a4d0-91163540b381-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.986883 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.987117 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/062940c5-dd5d-498d-a4d0-91163540b381-logs\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " 
pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.990198 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/062940c5-dd5d-498d-a4d0-91163540b381-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.997901 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-scripts\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:27 crc kubenswrapper[4728]: I0227 10:48:27.998586 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-config-data\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.000645 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.003048 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.003084 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b9ab4151576111345e22bb0de839169f32b3e09bf40bf7b14be12c506cea8a77/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 27 10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.033484 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wx7s\" (UniqueName: \"kubernetes.io/projected/062940c5-dd5d-498d-a4d0-91163540b381-kube-api-access-8wx7s\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.049300 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-4w8s7"] Feb 27 10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.051167 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.109399 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9jk9w" Feb 27 10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.110047 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tmx8h" Feb 27 10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.185265 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\") pod \"glance-default-internal-api-0\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.198660 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" event={"ID":"484474dd-d534-4903-99fb-afb3f53aa146","Type":"ContainerStarted","Data":"86737ab00b71e4d59c4daeb536395b1eedeebcbd9a389414196f3512f3c1b40a"} Feb 27 10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.213838 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lt869" event={"ID":"bcd8863f-4937-4e77-80b9-0bfbf852fcac","Type":"ContainerStarted","Data":"9bcd714432e176011983c4468db26de87b76e3c74ff679b41d17a3ec6ef5e954"} Feb 27 10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.213891 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lt869" event={"ID":"bcd8863f-4937-4e77-80b9-0bfbf852fcac","Type":"ContainerStarted","Data":"ba8506d4adcf188a1f6ff3b51c2e173686c0608cb594a2d3335a18ea96c10122"} Feb 27 10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.240163 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lt869" podStartSLOduration=2.240147535 podStartE2EDuration="2.240147535s" podCreationTimestamp="2026-02-27 10:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:48:28.238929901 +0000 UTC m=+1328.201296007" watchObservedRunningTime="2026-02-27 10:48:28.240147535 +0000 UTC m=+1328.202513641" Feb 27 
10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.245424 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.492780 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-dgjpm"] Feb 27 10:48:28 crc kubenswrapper[4728]: W0227 10:48:28.510705 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode977ffad_2764_4871_bdc8_24f0c3b4caf1.slice/crio-556c1c4bd1fe865165f86e3f38b978424099c08ffa37ddbece050573cd110cbb WatchSource:0}: Error finding container 556c1c4bd1fe865165f86e3f38b978424099c08ffa37ddbece050573cd110cbb: Status 404 returned error can't find the container with id 556c1c4bd1fe865165f86e3f38b978424099c08ffa37ddbece050573cd110cbb Feb 27 10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.517007 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dds8x"] Feb 27 10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.626681 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4msxf"] Feb 27 10:48:28 crc kubenswrapper[4728]: W0227 10:48:28.634129 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod085a2ebe_2930_448c_aec9_396b505fb399.slice/crio-e30b22412fbe300f318a754b43fe074f272d418426d4054a4fcf5ccdcd0d6a15 WatchSource:0}: Error finding container e30b22412fbe300f318a754b43fe074f272d418426d4054a4fcf5ccdcd0d6a15: Status 404 returned error can't find the container with id e30b22412fbe300f318a754b43fe074f272d418426d4054a4fcf5ccdcd0d6a15 Feb 27 10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.940081 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:48:28 crc kubenswrapper[4728]: W0227 10:48:28.955356 4728 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf69c278c_545b_4a40_9f34_53d895c528c0.slice/crio-e5bded05e85b2d9b7887263d81a2c417712efa950d6ff9789f418058a03144b8 WatchSource:0}: Error finding container e5bded05e85b2d9b7887263d81a2c417712efa950d6ff9789f418058a03144b8: Status 404 returned error can't find the container with id e5bded05e85b2d9b7887263d81a2c417712efa950d6ff9789f418058a03144b8 Feb 27 10:48:28 crc kubenswrapper[4728]: I0227 10:48:28.986632 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-947zv"] Feb 27 10:48:29 crc kubenswrapper[4728]: I0227 10:48:29.015104 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-v2vbf"] Feb 27 10:48:29 crc kubenswrapper[4728]: I0227 10:48:29.099954 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:48:29 crc kubenswrapper[4728]: I0227 10:48:29.157424 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:48:29 crc kubenswrapper[4728]: I0227 10:48:29.192190 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tmx8h"] Feb 27 10:48:29 crc kubenswrapper[4728]: I0227 10:48:29.245944 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:48:29 crc kubenswrapper[4728]: W0227 10:48:29.258317 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1a3c2f6_5fe8_4cea_a474_49773a0cb38e.slice/crio-78f6bf931be091bced60b987e2d1296316236023c4d02e81dcc519bf4ed62fe2 WatchSource:0}: Error finding container 78f6bf931be091bced60b987e2d1296316236023c4d02e81dcc519bf4ed62fe2: Status 404 returned error can't find the container with id 78f6bf931be091bced60b987e2d1296316236023c4d02e81dcc519bf4ed62fe2 Feb 27 10:48:29 crc kubenswrapper[4728]: I0227 
10:48:29.282969 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dds8x" event={"ID":"e293ec90-6006-49f0-8e24-8a0f4327d2cf","Type":"ContainerStarted","Data":"b6f2a7cc5773b343b0059d34e89bad92ddd5d95a363ee4540a1b0a055dd6c6a6"} Feb 27 10:48:29 crc kubenswrapper[4728]: I0227 10:48:29.308319 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dgjpm" event={"ID":"e977ffad-2764-4871-bdc8-24f0c3b4caf1","Type":"ContainerStarted","Data":"556c1c4bd1fe865165f86e3f38b978424099c08ffa37ddbece050573cd110cbb"} Feb 27 10:48:29 crc kubenswrapper[4728]: I0227 10:48:29.312981 4728 generic.go:334] "Generic (PLEG): container finished" podID="484474dd-d534-4903-99fb-afb3f53aa146" containerID="5fae834bd73a8d1ec368c948c7d037025dcdeae45d3b497cb49ed451a37eae08" exitCode=0 Feb 27 10:48:29 crc kubenswrapper[4728]: I0227 10:48:29.313029 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" event={"ID":"484474dd-d534-4903-99fb-afb3f53aa146","Type":"ContainerDied","Data":"5fae834bd73a8d1ec368c948c7d037025dcdeae45d3b497cb49ed451a37eae08"} Feb 27 10:48:29 crc kubenswrapper[4728]: I0227 10:48:29.362897 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7dd98b-314c-4e6c-a45b-31168398fca3","Type":"ContainerStarted","Data":"7ebafb18aae18f1edef65f0f1b611c9ecfcbf7297844106be1a19afb0c193edb"} Feb 27 10:48:29 crc kubenswrapper[4728]: I0227 10:48:29.482175 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4msxf" event={"ID":"085a2ebe-2930-448c-aec9-396b505fb399","Type":"ContainerStarted","Data":"d58201e350456ff6648eab9f54f9a69bc1f8422e0ff56b1e2779544fae8a2756"} Feb 27 10:48:29 crc kubenswrapper[4728]: I0227 10:48:29.482225 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4msxf" 
event={"ID":"085a2ebe-2930-448c-aec9-396b505fb399","Type":"ContainerStarted","Data":"e30b22412fbe300f318a754b43fe074f272d418426d4054a4fcf5ccdcd0d6a15"} Feb 27 10:48:29 crc kubenswrapper[4728]: I0227 10:48:29.518289 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-947zv" event={"ID":"4028a7cf-4a8a-4901-96f2-ee5577db5592","Type":"ContainerStarted","Data":"b3e9ae206216f35b1724d7e0e152f9251e66797a7951f24736c877ec5b37adc8"} Feb 27 10:48:29 crc kubenswrapper[4728]: I0227 10:48:29.525605 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v2vbf" event={"ID":"f69c278c-545b-4a40-9f34-53d895c528c0","Type":"ContainerStarted","Data":"e5bded05e85b2d9b7887263d81a2c417712efa950d6ff9789f418058a03144b8"} Feb 27 10:48:29 crc kubenswrapper[4728]: I0227 10:48:29.581575 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:48:29 crc kubenswrapper[4728]: I0227 10:48:29.672442 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4msxf" podStartSLOduration=3.672413234 podStartE2EDuration="3.672413234s" podCreationTimestamp="2026-02-27 10:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:48:29.518125161 +0000 UTC m=+1329.480491267" watchObservedRunningTime="2026-02-27 10:48:29.672413234 +0000 UTC m=+1329.634779360" Feb 27 10:48:29 crc kubenswrapper[4728]: I0227 10:48:29.692552 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.228122 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.260361 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-ovsdbserver-nb\") pod \"484474dd-d534-4903-99fb-afb3f53aa146\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.260456 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-dns-svc\") pod \"484474dd-d534-4903-99fb-afb3f53aa146\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.260596 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-config\") pod \"484474dd-d534-4903-99fb-afb3f53aa146\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.260643 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-dns-swift-storage-0\") pod \"484474dd-d534-4903-99fb-afb3f53aa146\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.260680 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-ovsdbserver-sb\") pod \"484474dd-d534-4903-99fb-afb3f53aa146\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.260696 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsmpc\" 
(UniqueName: \"kubernetes.io/projected/484474dd-d534-4903-99fb-afb3f53aa146-kube-api-access-hsmpc\") pod \"484474dd-d534-4903-99fb-afb3f53aa146\" (UID: \"484474dd-d534-4903-99fb-afb3f53aa146\") " Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.282298 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/484474dd-d534-4903-99fb-afb3f53aa146-kube-api-access-hsmpc" (OuterVolumeSpecName: "kube-api-access-hsmpc") pod "484474dd-d534-4903-99fb-afb3f53aa146" (UID: "484474dd-d534-4903-99fb-afb3f53aa146"). InnerVolumeSpecName "kube-api-access-hsmpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.301606 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-config" (OuterVolumeSpecName: "config") pod "484474dd-d534-4903-99fb-afb3f53aa146" (UID: "484474dd-d534-4903-99fb-afb3f53aa146"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.308031 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "484474dd-d534-4903-99fb-afb3f53aa146" (UID: "484474dd-d534-4903-99fb-afb3f53aa146"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.309762 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "484474dd-d534-4903-99fb-afb3f53aa146" (UID: "484474dd-d534-4903-99fb-afb3f53aa146"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.314972 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "484474dd-d534-4903-99fb-afb3f53aa146" (UID: "484474dd-d534-4903-99fb-afb3f53aa146"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.343953 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "484474dd-d534-4903-99fb-afb3f53aa146" (UID: "484474dd-d534-4903-99fb-afb3f53aa146"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.363761 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.363798 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.363811 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.363821 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsmpc\" (UniqueName: \"kubernetes.io/projected/484474dd-d534-4903-99fb-afb3f53aa146-kube-api-access-hsmpc\") on node \"crc\" DevicePath \"\"" 
Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.363832 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.363840 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/484474dd-d534-4903-99fb-afb3f53aa146-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.554262 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" event={"ID":"484474dd-d534-4903-99fb-afb3f53aa146","Type":"ContainerDied","Data":"86737ab00b71e4d59c4daeb536395b1eedeebcbd9a389414196f3512f3c1b40a"} Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.554323 4728 scope.go:117] "RemoveContainer" containerID="5fae834bd73a8d1ec368c948c7d037025dcdeae45d3b497cb49ed451a37eae08" Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.554331 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-4w8s7" Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.556335 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tmx8h" event={"ID":"141bf253-61a8-46a0-9d10-9aefbbd124c6","Type":"ContainerStarted","Data":"087f7830cba62f0afa60f87852d0de52a3bef4dad70f204816f650d0aff4cd02"} Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.561546 4728 generic.go:334] "Generic (PLEG): container finished" podID="4028a7cf-4a8a-4901-96f2-ee5577db5592" containerID="a2fd235e74b6961cd1469db099014e0c5392a53cd2641a0220669630643886ac" exitCode=0 Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.561635 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-947zv" event={"ID":"4028a7cf-4a8a-4901-96f2-ee5577db5592","Type":"ContainerDied","Data":"a2fd235e74b6961cd1469db099014e0c5392a53cd2641a0220669630643886ac"} Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.565299 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"062940c5-dd5d-498d-a4d0-91163540b381","Type":"ContainerStarted","Data":"f40f45ab52b31220e1a56a241ea82da7d06125a77be59e483c2c61cd2a9c6ae3"} Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.569468 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e","Type":"ContainerStarted","Data":"78f6bf931be091bced60b987e2d1296316236023c4d02e81dcc519bf4ed62fe2"} Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.651919 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-4w8s7"] Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.663732 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-4w8s7"] Feb 27 10:48:30 crc kubenswrapper[4728]: I0227 10:48:30.760794 4728 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="484474dd-d534-4903-99fb-afb3f53aa146" path="/var/lib/kubelet/pods/484474dd-d534-4903-99fb-afb3f53aa146/volumes" Feb 27 10:48:31 crc kubenswrapper[4728]: I0227 10:48:31.605774 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-947zv" event={"ID":"4028a7cf-4a8a-4901-96f2-ee5577db5592","Type":"ContainerStarted","Data":"446191cd0a92bd3c333bb45fe9c5adb346168e0b6a2a0f1cf83b3c8a4b1ab86b"} Feb 27 10:48:31 crc kubenswrapper[4728]: I0227 10:48:31.606107 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:31 crc kubenswrapper[4728]: I0227 10:48:31.614901 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"062940c5-dd5d-498d-a4d0-91163540b381","Type":"ContainerStarted","Data":"1c8c2c3d06ba0e195365b14dc203de0706a7d88d30bf88b287cec919226c79d2"} Feb 27 10:48:31 crc kubenswrapper[4728]: I0227 10:48:31.620720 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e","Type":"ContainerStarted","Data":"d22101c88e6582a8c598b802cd9cbd1a4e35d7dafd35b837bd745fbe1768106c"} Feb 27 10:48:32 crc kubenswrapper[4728]: I0227 10:48:32.652397 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"062940c5-dd5d-498d-a4d0-91163540b381","Type":"ContainerStarted","Data":"53346000fca72bf31ce19acafdeb871743a19c241ca5460f4736f652ed79c4be"} Feb 27 10:48:32 crc kubenswrapper[4728]: I0227 10:48:32.652936 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="062940c5-dd5d-498d-a4d0-91163540b381" containerName="glance-log" containerID="cri-o://1c8c2c3d06ba0e195365b14dc203de0706a7d88d30bf88b287cec919226c79d2" gracePeriod=30 Feb 27 10:48:32 crc kubenswrapper[4728]: 
I0227 10:48:32.653428 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="062940c5-dd5d-498d-a4d0-91163540b381" containerName="glance-httpd" containerID="cri-o://53346000fca72bf31ce19acafdeb871743a19c241ca5460f4736f652ed79c4be" gracePeriod=30 Feb 27 10:48:32 crc kubenswrapper[4728]: I0227 10:48:32.670084 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" containerName="glance-log" containerID="cri-o://d22101c88e6582a8c598b802cd9cbd1a4e35d7dafd35b837bd745fbe1768106c" gracePeriod=30 Feb 27 10:48:32 crc kubenswrapper[4728]: I0227 10:48:32.670770 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e","Type":"ContainerStarted","Data":"fab3d1247d35ca3e9a13000dafe14ec5e0cc51c5b1dca11f2cb30a0f6e87b414"} Feb 27 10:48:32 crc kubenswrapper[4728]: I0227 10:48:32.670828 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" containerName="glance-httpd" containerID="cri-o://fab3d1247d35ca3e9a13000dafe14ec5e0cc51c5b1dca11f2cb30a0f6e87b414" gracePeriod=30 Feb 27 10:48:32 crc kubenswrapper[4728]: I0227 10:48:32.676277 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57c957c4ff-947zv" podStartSLOduration=5.676243923 podStartE2EDuration="5.676243923s" podCreationTimestamp="2026-02-27 10:48:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:48:31.651914443 +0000 UTC m=+1331.614280549" watchObservedRunningTime="2026-02-27 10:48:32.676243923 +0000 UTC m=+1332.638610039" Feb 27 10:48:32 crc kubenswrapper[4728]: I0227 10:48:32.695958 4728 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.695936024 podStartE2EDuration="6.695936024s" podCreationTimestamp="2026-02-27 10:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:48:32.678847665 +0000 UTC m=+1332.641213791" watchObservedRunningTime="2026-02-27 10:48:32.695936024 +0000 UTC m=+1332.658302130" Feb 27 10:48:32 crc kubenswrapper[4728]: I0227 10:48:32.712979 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.7129571519999995 podStartE2EDuration="6.712957152s" podCreationTimestamp="2026-02-27 10:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:48:32.704445918 +0000 UTC m=+1332.666812024" watchObservedRunningTime="2026-02-27 10:48:32.712957152 +0000 UTC m=+1332.675323258" Feb 27 10:48:33 crc kubenswrapper[4728]: I0227 10:48:33.715128 4728 generic.go:334] "Generic (PLEG): container finished" podID="c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" containerID="fab3d1247d35ca3e9a13000dafe14ec5e0cc51c5b1dca11f2cb30a0f6e87b414" exitCode=0 Feb 27 10:48:33 crc kubenswrapper[4728]: I0227 10:48:33.715424 4728 generic.go:334] "Generic (PLEG): container finished" podID="c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" containerID="d22101c88e6582a8c598b802cd9cbd1a4e35d7dafd35b837bd745fbe1768106c" exitCode=143 Feb 27 10:48:33 crc kubenswrapper[4728]: I0227 10:48:33.715186 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e","Type":"ContainerDied","Data":"fab3d1247d35ca3e9a13000dafe14ec5e0cc51c5b1dca11f2cb30a0f6e87b414"} Feb 27 10:48:33 crc kubenswrapper[4728]: I0227 10:48:33.715488 4728 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e","Type":"ContainerDied","Data":"d22101c88e6582a8c598b802cd9cbd1a4e35d7dafd35b837bd745fbe1768106c"} Feb 27 10:48:33 crc kubenswrapper[4728]: I0227 10:48:33.721382 4728 generic.go:334] "Generic (PLEG): container finished" podID="062940c5-dd5d-498d-a4d0-91163540b381" containerID="53346000fca72bf31ce19acafdeb871743a19c241ca5460f4736f652ed79c4be" exitCode=0 Feb 27 10:48:33 crc kubenswrapper[4728]: I0227 10:48:33.721423 4728 generic.go:334] "Generic (PLEG): container finished" podID="062940c5-dd5d-498d-a4d0-91163540b381" containerID="1c8c2c3d06ba0e195365b14dc203de0706a7d88d30bf88b287cec919226c79d2" exitCode=143 Feb 27 10:48:33 crc kubenswrapper[4728]: I0227 10:48:33.721444 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"062940c5-dd5d-498d-a4d0-91163540b381","Type":"ContainerDied","Data":"53346000fca72bf31ce19acafdeb871743a19c241ca5460f4736f652ed79c4be"} Feb 27 10:48:33 crc kubenswrapper[4728]: I0227 10:48:33.721470 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"062940c5-dd5d-498d-a4d0-91163540b381","Type":"ContainerDied","Data":"1c8c2c3d06ba0e195365b14dc203de0706a7d88d30bf88b287cec919226c79d2"} Feb 27 10:48:34 crc kubenswrapper[4728]: I0227 10:48:34.752957 4728 generic.go:334] "Generic (PLEG): container finished" podID="bcd8863f-4937-4e77-80b9-0bfbf852fcac" containerID="9bcd714432e176011983c4468db26de87b76e3c74ff679b41d17a3ec6ef5e954" exitCode=0 Feb 27 10:48:34 crc kubenswrapper[4728]: I0227 10:48:34.753317 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lt869" event={"ID":"bcd8863f-4937-4e77-80b9-0bfbf852fcac","Type":"ContainerDied","Data":"9bcd714432e176011983c4468db26de87b76e3c74ff679b41d17a3ec6ef5e954"} Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.756344 4728 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.769444 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.769453 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"062940c5-dd5d-498d-a4d0-91163540b381","Type":"ContainerDied","Data":"f40f45ab52b31220e1a56a241ea82da7d06125a77be59e483c2c61cd2a9c6ae3"} Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.769547 4728 scope.go:117] "RemoveContainer" containerID="53346000fca72bf31ce19acafdeb871743a19c241ca5460f4736f652ed79c4be" Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.807771 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/062940c5-dd5d-498d-a4d0-91163540b381-logs\") pod \"062940c5-dd5d-498d-a4d0-91163540b381\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.807977 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\") pod \"062940c5-dd5d-498d-a4d0-91163540b381\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.808013 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wx7s\" (UniqueName: \"kubernetes.io/projected/062940c5-dd5d-498d-a4d0-91163540b381-kube-api-access-8wx7s\") pod \"062940c5-dd5d-498d-a4d0-91163540b381\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.808073 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-scripts\") pod \"062940c5-dd5d-498d-a4d0-91163540b381\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.808158 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-config-data\") pod \"062940c5-dd5d-498d-a4d0-91163540b381\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.808269 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-combined-ca-bundle\") pod \"062940c5-dd5d-498d-a4d0-91163540b381\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.808312 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-internal-tls-certs\") pod \"062940c5-dd5d-498d-a4d0-91163540b381\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.808313 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/062940c5-dd5d-498d-a4d0-91163540b381-logs" (OuterVolumeSpecName: "logs") pod "062940c5-dd5d-498d-a4d0-91163540b381" (UID: "062940c5-dd5d-498d-a4d0-91163540b381"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.808453 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/062940c5-dd5d-498d-a4d0-91163540b381-httpd-run\") pod \"062940c5-dd5d-498d-a4d0-91163540b381\" (UID: \"062940c5-dd5d-498d-a4d0-91163540b381\") " Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.809398 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/062940c5-dd5d-498d-a4d0-91163540b381-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.810728 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/062940c5-dd5d-498d-a4d0-91163540b381-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "062940c5-dd5d-498d-a4d0-91163540b381" (UID: "062940c5-dd5d-498d-a4d0-91163540b381"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.821463 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/062940c5-dd5d-498d-a4d0-91163540b381-kube-api-access-8wx7s" (OuterVolumeSpecName: "kube-api-access-8wx7s") pod "062940c5-dd5d-498d-a4d0-91163540b381" (UID: "062940c5-dd5d-498d-a4d0-91163540b381"). InnerVolumeSpecName "kube-api-access-8wx7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.831677 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec" (OuterVolumeSpecName: "glance") pod "062940c5-dd5d-498d-a4d0-91163540b381" (UID: "062940c5-dd5d-498d-a4d0-91163540b381"). InnerVolumeSpecName "pvc-35b05381-be13-4917-9b03-b3cf66c9fdec". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.841862 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-scripts" (OuterVolumeSpecName: "scripts") pod "062940c5-dd5d-498d-a4d0-91163540b381" (UID: "062940c5-dd5d-498d-a4d0-91163540b381"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.860257 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "062940c5-dd5d-498d-a4d0-91163540b381" (UID: "062940c5-dd5d-498d-a4d0-91163540b381"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.874074 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "062940c5-dd5d-498d-a4d0-91163540b381" (UID: "062940c5-dd5d-498d-a4d0-91163540b381"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.894277 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-config-data" (OuterVolumeSpecName: "config-data") pod "062940c5-dd5d-498d-a4d0-91163540b381" (UID: "062940c5-dd5d-498d-a4d0-91163540b381"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.911054 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.911094 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.911109 4728 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.911120 4728 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/062940c5-dd5d-498d-a4d0-91163540b381-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.911160 4728 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\") on node \"crc\" " Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.911171 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wx7s\" (UniqueName: \"kubernetes.io/projected/062940c5-dd5d-498d-a4d0-91163540b381-kube-api-access-8wx7s\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.911181 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/062940c5-dd5d-498d-a4d0-91163540b381-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:35 crc 
kubenswrapper[4728]: I0227 10:48:35.928667 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.928727 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.944715 4728 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 27 10:48:35 crc kubenswrapper[4728]: I0227 10:48:35.949380 4728 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-35b05381-be13-4917-9b03-b3cf66c9fdec" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec") on node "crc" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.013559 4728 reconciler_common.go:293] "Volume detached for volume \"pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.118915 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.141107 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.162206 4728 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-internal-api-0"] Feb 27 10:48:36 crc kubenswrapper[4728]: E0227 10:48:36.163619 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062940c5-dd5d-498d-a4d0-91163540b381" containerName="glance-httpd" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.163641 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="062940c5-dd5d-498d-a4d0-91163540b381" containerName="glance-httpd" Feb 27 10:48:36 crc kubenswrapper[4728]: E0227 10:48:36.163672 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062940c5-dd5d-498d-a4d0-91163540b381" containerName="glance-log" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.163680 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="062940c5-dd5d-498d-a4d0-91163540b381" containerName="glance-log" Feb 27 10:48:36 crc kubenswrapper[4728]: E0227 10:48:36.163694 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484474dd-d534-4903-99fb-afb3f53aa146" containerName="init" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.163702 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="484474dd-d534-4903-99fb-afb3f53aa146" containerName="init" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.163907 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="062940c5-dd5d-498d-a4d0-91163540b381" containerName="glance-log" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.163935 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="484474dd-d534-4903-99fb-afb3f53aa146" containerName="init" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.163948 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="062940c5-dd5d-498d-a4d0-91163540b381" containerName="glance-httpd" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.165172 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.167588 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.167808 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.179590 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.217790 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.217899 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4489bcc6-1de2-45f0-993d-67bd941e697c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.217962 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv78l\" (UniqueName: \"kubernetes.io/projected/4489bcc6-1de2-45f0-993d-67bd941e697c-kube-api-access-qv78l\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.218071 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.218108 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4489bcc6-1de2-45f0-993d-67bd941e697c-logs\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.218215 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.218250 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.218293 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.321431 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qv78l\" (UniqueName: \"kubernetes.io/projected/4489bcc6-1de2-45f0-993d-67bd941e697c-kube-api-access-qv78l\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.321924 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.321963 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4489bcc6-1de2-45f0-993d-67bd941e697c-logs\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.322058 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.322111 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.322161 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.322267 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.322364 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4489bcc6-1de2-45f0-993d-67bd941e697c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.322815 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4489bcc6-1de2-45f0-993d-67bd941e697c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.322895 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4489bcc6-1de2-45f0-993d-67bd941e697c-logs\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.334419 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.334435 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.334772 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.335077 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.335141 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.335162 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b9ab4151576111345e22bb0de839169f32b3e09bf40bf7b14be12c506cea8a77/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.339385 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv78l\" (UniqueName: \"kubernetes.io/projected/4489bcc6-1de2-45f0-993d-67bd941e697c-kube-api-access-qv78l\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.389908 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\") pod \"glance-default-internal-api-0\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.490500 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:48:36 crc kubenswrapper[4728]: I0227 10:48:36.738152 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="062940c5-dd5d-498d-a4d0-91163540b381" path="/var/lib/kubelet/pods/062940c5-dd5d-498d-a4d0-91163540b381/volumes" Feb 27 10:48:37 crc kubenswrapper[4728]: I0227 10:48:37.748836 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:48:37 crc kubenswrapper[4728]: I0227 10:48:37.832162 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-lfgq9"] Feb 27 10:48:37 crc kubenswrapper[4728]: I0227 10:48:37.832405 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" podUID="1080a33e-f552-404c-8381-281cf9ab1078" containerName="dnsmasq-dns" containerID="cri-o://ead82b73a5ce8b18f1147934bda38e7b8c4732a6f3fc5d0b5ac5d099b38fb766" gracePeriod=10 Feb 27 10:48:38 crc kubenswrapper[4728]: I0227 10:48:38.808285 4728 generic.go:334] "Generic (PLEG): container finished" podID="1080a33e-f552-404c-8381-281cf9ab1078" containerID="ead82b73a5ce8b18f1147934bda38e7b8c4732a6f3fc5d0b5ac5d099b38fb766" exitCode=0 Feb 27 10:48:38 crc kubenswrapper[4728]: I0227 10:48:38.808333 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" event={"ID":"1080a33e-f552-404c-8381-281cf9ab1078","Type":"ContainerDied","Data":"ead82b73a5ce8b18f1147934bda38e7b8c4732a6f3fc5d0b5ac5d099b38fb766"} Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.150180 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.202875 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-credential-keys\") pod \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.202923 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-fernet-keys\") pod \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.202977 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-combined-ca-bundle\") pod \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.203853 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-config-data\") pod \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.203954 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-scripts\") pod \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.204005 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqtrs\" (UniqueName: 
\"kubernetes.io/projected/bcd8863f-4937-4e77-80b9-0bfbf852fcac-kube-api-access-wqtrs\") pod \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\" (UID: \"bcd8863f-4937-4e77-80b9-0bfbf852fcac\") " Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.210965 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-scripts" (OuterVolumeSpecName: "scripts") pod "bcd8863f-4937-4e77-80b9-0bfbf852fcac" (UID: "bcd8863f-4937-4e77-80b9-0bfbf852fcac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.218629 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bcd8863f-4937-4e77-80b9-0bfbf852fcac" (UID: "bcd8863f-4937-4e77-80b9-0bfbf852fcac"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.218678 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bcd8863f-4937-4e77-80b9-0bfbf852fcac" (UID: "bcd8863f-4937-4e77-80b9-0bfbf852fcac"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.223004 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcd8863f-4937-4e77-80b9-0bfbf852fcac-kube-api-access-wqtrs" (OuterVolumeSpecName: "kube-api-access-wqtrs") pod "bcd8863f-4937-4e77-80b9-0bfbf852fcac" (UID: "bcd8863f-4937-4e77-80b9-0bfbf852fcac"). InnerVolumeSpecName "kube-api-access-wqtrs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.236793 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcd8863f-4937-4e77-80b9-0bfbf852fcac" (UID: "bcd8863f-4937-4e77-80b9-0bfbf852fcac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.241259 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-config-data" (OuterVolumeSpecName: "config-data") pod "bcd8863f-4937-4e77-80b9-0bfbf852fcac" (UID: "bcd8863f-4937-4e77-80b9-0bfbf852fcac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.306729 4728 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.306766 4728 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.306781 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.306793 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 
10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.306804 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcd8863f-4937-4e77-80b9-0bfbf852fcac-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.306818 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqtrs\" (UniqueName: \"kubernetes.io/projected/bcd8863f-4937-4e77-80b9-0bfbf852fcac-kube-api-access-wqtrs\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.831083 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lt869" event={"ID":"bcd8863f-4937-4e77-80b9-0bfbf852fcac","Type":"ContainerDied","Data":"ba8506d4adcf188a1f6ff3b51c2e173686c0608cb594a2d3335a18ea96c10122"} Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.831129 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba8506d4adcf188a1f6ff3b51c2e173686c0608cb594a2d3335a18ea96c10122" Feb 27 10:48:39 crc kubenswrapper[4728]: I0227 10:48:39.831151 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lt869" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.249578 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lt869"] Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.260888 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lt869"] Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.329128 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tstt8"] Feb 27 10:48:40 crc kubenswrapper[4728]: E0227 10:48:40.329551 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcd8863f-4937-4e77-80b9-0bfbf852fcac" containerName="keystone-bootstrap" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.329568 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcd8863f-4937-4e77-80b9-0bfbf852fcac" containerName="keystone-bootstrap" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.329786 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcd8863f-4937-4e77-80b9-0bfbf852fcac" containerName="keystone-bootstrap" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.331060 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.333241 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.338002 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.338193 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.338251 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.342928 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2jf8f" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.355541 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tstt8"] Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.441748 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-fernet-keys\") pod \"keystone-bootstrap-tstt8\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.441801 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-combined-ca-bundle\") pod \"keystone-bootstrap-tstt8\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.441847 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-scripts\") pod \"keystone-bootstrap-tstt8\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.442023 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-credential-keys\") pod \"keystone-bootstrap-tstt8\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.442061 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-config-data\") pod \"keystone-bootstrap-tstt8\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.442088 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgrmr\" (UniqueName: \"kubernetes.io/projected/d3ddaa20-d617-4d45-9e82-b31c982de147-kube-api-access-rgrmr\") pod \"keystone-bootstrap-tstt8\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.544094 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-credential-keys\") pod \"keystone-bootstrap-tstt8\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.544156 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-config-data\") pod \"keystone-bootstrap-tstt8\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.544173 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgrmr\" (UniqueName: \"kubernetes.io/projected/d3ddaa20-d617-4d45-9e82-b31c982de147-kube-api-access-rgrmr\") pod \"keystone-bootstrap-tstt8\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.544244 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-fernet-keys\") pod \"keystone-bootstrap-tstt8\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.544263 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-combined-ca-bundle\") pod \"keystone-bootstrap-tstt8\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.544288 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-scripts\") pod \"keystone-bootstrap-tstt8\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.560940 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-combined-ca-bundle\") pod 
\"keystone-bootstrap-tstt8\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.560943 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-scripts\") pod \"keystone-bootstrap-tstt8\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.566171 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgrmr\" (UniqueName: \"kubernetes.io/projected/d3ddaa20-d617-4d45-9e82-b31c982de147-kube-api-access-rgrmr\") pod \"keystone-bootstrap-tstt8\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.568335 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-credential-keys\") pod \"keystone-bootstrap-tstt8\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.572349 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-config-data\") pod \"keystone-bootstrap-tstt8\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.574111 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-fernet-keys\") pod \"keystone-bootstrap-tstt8\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc 
kubenswrapper[4728]: I0227 10:48:40.659867 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:48:40 crc kubenswrapper[4728]: I0227 10:48:40.737608 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcd8863f-4937-4e77-80b9-0bfbf852fcac" path="/var/lib/kubelet/pods/bcd8863f-4937-4e77-80b9-0bfbf852fcac/volumes" Feb 27 10:48:42 crc kubenswrapper[4728]: I0227 10:48:42.474374 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" podUID="1080a33e-f552-404c-8381-281cf9ab1078" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: connect: connection refused" Feb 27 10:48:46 crc kubenswrapper[4728]: E0227 10:48:46.425685 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 27 10:48:46 crc kubenswrapper[4728]: E0227 10:48:46.426320 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rg9b5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-tmx8h_openstack(141bf253-61a8-46a0-9d10-9aefbbd124c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:48:46 crc kubenswrapper[4728]: E0227 10:48:46.427712 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-tmx8h" podUID="141bf253-61a8-46a0-9d10-9aefbbd124c6" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.540695 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.596288 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-combined-ca-bundle\") pod \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.596544 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-config-data\") pod \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.596739 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zlmc\" (UniqueName: \"kubernetes.io/projected/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-kube-api-access-4zlmc\") pod \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.599722 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-public-tls-certs\") pod \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.599864 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-httpd-run\") pod \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.599892 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-logs\") pod \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.599911 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-scripts\") pod \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.600110 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\") pod \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\" (UID: \"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e\") " Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.600558 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-logs" (OuterVolumeSpecName: "logs") pod "c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" (UID: "c1a3c2f6-5fe8-4cea-a474-49773a0cb38e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.600853 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" (UID: "c1a3c2f6-5fe8-4cea-a474-49773a0cb38e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.601709 4728 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.601759 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.605775 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-scripts" (OuterVolumeSpecName: "scripts") pod "c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" (UID: "c1a3c2f6-5fe8-4cea-a474-49773a0cb38e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.606137 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-kube-api-access-4zlmc" (OuterVolumeSpecName: "kube-api-access-4zlmc") pod "c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" (UID: "c1a3c2f6-5fe8-4cea-a474-49773a0cb38e"). InnerVolumeSpecName "kube-api-access-4zlmc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.632612 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" (UID: "c1a3c2f6-5fe8-4cea-a474-49773a0cb38e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.632628 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025" (OuterVolumeSpecName: "glance") pod "c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" (UID: "c1a3c2f6-5fe8-4cea-a474-49773a0cb38e"). InnerVolumeSpecName "pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.663252 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" (UID: "c1a3c2f6-5fe8-4cea-a474-49773a0cb38e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.663681 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-config-data" (OuterVolumeSpecName: "config-data") pod "c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" (UID: "c1a3c2f6-5fe8-4cea-a474-49773a0cb38e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.705765 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zlmc\" (UniqueName: \"kubernetes.io/projected/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-kube-api-access-4zlmc\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.705836 4728 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.705849 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.705919 4728 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\") on node \"crc\" " Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.705937 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.705952 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.738104 4728 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.738302 4728 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025") on node "crc" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.810330 4728 reconciler_common.go:293] "Volume detached for volume \"pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\") on node \"crc\" DevicePath \"\"" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.910491 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.911021 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c1a3c2f6-5fe8-4cea-a474-49773a0cb38e","Type":"ContainerDied","Data":"78f6bf931be091bced60b987e2d1296316236023c4d02e81dcc519bf4ed62fe2"} Feb 27 10:48:46 crc kubenswrapper[4728]: E0227 10:48:46.912593 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-tmx8h" podUID="141bf253-61a8-46a0-9d10-9aefbbd124c6" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.960704 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.978865 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.996142 4728 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-external-api-0"] Feb 27 10:48:46 crc kubenswrapper[4728]: E0227 10:48:46.996727 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" containerName="glance-log" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.996751 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" containerName="glance-log" Feb 27 10:48:46 crc kubenswrapper[4728]: E0227 10:48:46.996781 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" containerName="glance-httpd" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.996790 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" containerName="glance-httpd" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.997052 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" containerName="glance-log" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.997083 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" containerName="glance-httpd" Feb 27 10:48:46 crc kubenswrapper[4728]: I0227 10:48:46.998410 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.002222 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.003141 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.007902 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.118682 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.119024 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-logs\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.119048 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-config-data\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.119070 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.119140 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hfwv\" (UniqueName: \"kubernetes.io/projected/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-kube-api-access-2hfwv\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.119211 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-scripts\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.119241 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.119401 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: E0227 10:48:47.201332 4728 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 27 10:48:47 crc kubenswrapper[4728]: E0227 10:48:47.201523 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7cb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
barbican-db-sync-v2vbf_openstack(f69c278c-545b-4a40-9f34-53d895c528c0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:48:47 crc kubenswrapper[4728]: E0227 10:48:47.202760 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-v2vbf" podUID="f69c278c-545b-4a40-9f34-53d895c528c0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.221183 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-scripts\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.221252 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.221413 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.221480 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.221869 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-logs\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.221898 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-config-data\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.221922 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.221941 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hfwv\" (UniqueName: \"kubernetes.io/projected/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-kube-api-access-2hfwv\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.222437 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-logs\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " 
pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.222633 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.223781 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.223815 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d1be12f0d287d711dc64205ed4c6dc8f46de7817821a549d7df0761d45187fe2/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.227350 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.227453 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-scripts\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc 
kubenswrapper[4728]: I0227 10:48:47.227893 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-config-data\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.229447 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.242381 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hfwv\" (UniqueName: \"kubernetes.io/projected/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-kube-api-access-2hfwv\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.270542 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\") pod \"glance-default-external-api-0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: I0227 10:48:47.343595 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 10:48:47 crc kubenswrapper[4728]: E0227 10:48:47.923160 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-v2vbf" podUID="f69c278c-545b-4a40-9f34-53d895c528c0" Feb 27 10:48:48 crc kubenswrapper[4728]: I0227 10:48:48.745187 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a3c2f6-5fe8-4cea-a474-49773a0cb38e" path="/var/lib/kubelet/pods/c1a3c2f6-5fe8-4cea-a474-49773a0cb38e/volumes" Feb 27 10:48:52 crc kubenswrapper[4728]: I0227 10:48:52.475196 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" podUID="1080a33e-f552-404c-8381-281cf9ab1078" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: i/o timeout" Feb 27 10:48:57 crc kubenswrapper[4728]: I0227 10:48:57.475977 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" podUID="1080a33e-f552-404c-8381-281cf9ab1078" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: i/o timeout" Feb 27 10:48:57 crc kubenswrapper[4728]: I0227 10:48:57.476902 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:49:01 crc kubenswrapper[4728]: I0227 10:49:01.674888 4728 scope.go:117] "RemoveContainer" containerID="511a8287396f7123a5ee403d95ae693f605a6e64df03c9c717e4dc3b0b14f42d" Feb 27 10:49:02 crc kubenswrapper[4728]: E0227 10:49:02.280136 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Feb 27 10:49:02 crc kubenswrapper[4728]: 
E0227 10:49:02.280602 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8gs7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-dgjpm_openstack(e977ffad-2764-4871-bdc8-24f0c3b4caf1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:49:02 crc kubenswrapper[4728]: E0227 10:49:02.281840 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-dgjpm" podUID="e977ffad-2764-4871-bdc8-24f0c3b4caf1" Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.317489 4728 scope.go:117] "RemoveContainer" containerID="1c8c2c3d06ba0e195365b14dc203de0706a7d88d30bf88b287cec919226c79d2" Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.417383 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.477017 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" podUID="1080a33e-f552-404c-8381-281cf9ab1078" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: i/o timeout" Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.572676 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-ovsdbserver-nb\") pod \"1080a33e-f552-404c-8381-281cf9ab1078\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.572809 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-config\") pod \"1080a33e-f552-404c-8381-281cf9ab1078\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.572871 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66rdk\" (UniqueName: \"kubernetes.io/projected/1080a33e-f552-404c-8381-281cf9ab1078-kube-api-access-66rdk\") pod \"1080a33e-f552-404c-8381-281cf9ab1078\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.572946 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-ovsdbserver-sb\") pod \"1080a33e-f552-404c-8381-281cf9ab1078\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.572981 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-dns-svc\") pod \"1080a33e-f552-404c-8381-281cf9ab1078\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.573047 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-dns-swift-storage-0\") pod \"1080a33e-f552-404c-8381-281cf9ab1078\" (UID: \"1080a33e-f552-404c-8381-281cf9ab1078\") " Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.584142 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1080a33e-f552-404c-8381-281cf9ab1078-kube-api-access-66rdk" (OuterVolumeSpecName: "kube-api-access-66rdk") pod "1080a33e-f552-404c-8381-281cf9ab1078" (UID: "1080a33e-f552-404c-8381-281cf9ab1078"). InnerVolumeSpecName "kube-api-access-66rdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.630385 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-config" (OuterVolumeSpecName: "config") pod "1080a33e-f552-404c-8381-281cf9ab1078" (UID: "1080a33e-f552-404c-8381-281cf9ab1078"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.631066 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1080a33e-f552-404c-8381-281cf9ab1078" (UID: "1080a33e-f552-404c-8381-281cf9ab1078"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.645833 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1080a33e-f552-404c-8381-281cf9ab1078" (UID: "1080a33e-f552-404c-8381-281cf9ab1078"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.652270 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1080a33e-f552-404c-8381-281cf9ab1078" (UID: "1080a33e-f552-404c-8381-281cf9ab1078"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.658594 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1080a33e-f552-404c-8381-281cf9ab1078" (UID: "1080a33e-f552-404c-8381-281cf9ab1078"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.674933 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.674968 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.674979 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66rdk\" (UniqueName: \"kubernetes.io/projected/1080a33e-f552-404c-8381-281cf9ab1078-kube-api-access-66rdk\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.674988 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.674999 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:02 crc kubenswrapper[4728]: I0227 10:49:02.675008 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1080a33e-f552-404c-8381-281cf9ab1078-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:03 crc kubenswrapper[4728]: I0227 10:49:03.115264 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" Feb 27 10:49:03 crc kubenswrapper[4728]: I0227 10:49:03.115432 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-lfgq9" event={"ID":"1080a33e-f552-404c-8381-281cf9ab1078","Type":"ContainerDied","Data":"0e1b42e708d2188bf4de26042523426417bd09220d68a4ce2a056ee050552c4d"} Feb 27 10:49:03 crc kubenswrapper[4728]: E0227 10:49:03.121480 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-dgjpm" podUID="e977ffad-2764-4871-bdc8-24f0c3b4caf1" Feb 27 10:49:03 crc kubenswrapper[4728]: I0227 10:49:03.184983 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-lfgq9"] Feb 27 10:49:03 crc kubenswrapper[4728]: I0227 10:49:03.203439 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-lfgq9"] Feb 27 10:49:03 crc kubenswrapper[4728]: E0227 10:49:03.688292 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 27 10:49:03 crc kubenswrapper[4728]: E0227 10:49:03.688448 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sm7br,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-dds8x_openstack(e293ec90-6006-49f0-8e24-8a0f4327d2cf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:49:03 crc kubenswrapper[4728]: E0227 10:49:03.689742 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-dds8x" podUID="e293ec90-6006-49f0-8e24-8a0f4327d2cf" Feb 27 10:49:03 crc kubenswrapper[4728]: I0227 10:49:03.940369 4728 scope.go:117] "RemoveContainer" containerID="fab3d1247d35ca3e9a13000dafe14ec5e0cc51c5b1dca11f2cb30a0f6e87b414" Feb 27 10:49:04 crc kubenswrapper[4728]: I0227 10:49:04.018143 4728 scope.go:117] "RemoveContainer" containerID="d22101c88e6582a8c598b802cd9cbd1a4e35d7dafd35b837bd745fbe1768106c" Feb 27 10:49:04 crc kubenswrapper[4728]: E0227 10:49:04.034356 4728 kuberuntime_gc.go:389] "Failed to remove container log dead symlink" err="remove /var/log/containers/glance-default-external-api-0_openstack_glance-log-d22101c88e6582a8c598b802cd9cbd1a4e35d7dafd35b837bd745fbe1768106c.log: no such file or directory" path="/var/log/containers/glance-default-external-api-0_openstack_glance-log-d22101c88e6582a8c598b802cd9cbd1a4e35d7dafd35b837bd745fbe1768106c.log" Feb 27 10:49:04 crc kubenswrapper[4728]: E0227 10:49:04.142174 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-dds8x" podUID="e293ec90-6006-49f0-8e24-8a0f4327d2cf" Feb 27 10:49:04 crc kubenswrapper[4728]: 
I0227 10:49:04.197807 4728 scope.go:117] "RemoveContainer" containerID="ead82b73a5ce8b18f1147934bda38e7b8c4732a6f3fc5d0b5ac5d099b38fb766" Feb 27 10:49:04 crc kubenswrapper[4728]: I0227 10:49:04.223974 4728 scope.go:117] "RemoveContainer" containerID="393cc683237d0f1226c9deb8bd8852cd4c871681f45d92bcc309735445638319" Feb 27 10:49:04 crc kubenswrapper[4728]: I0227 10:49:04.353045 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:49:04 crc kubenswrapper[4728]: I0227 10:49:04.422489 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tstt8"] Feb 27 10:49:04 crc kubenswrapper[4728]: W0227 10:49:04.431550 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3ddaa20_d617_4d45_9e82_b31c982de147.slice/crio-d1f66f5d78d45652595e434984107e0faef329550b5294b1488a61fde74082ad WatchSource:0}: Error finding container d1f66f5d78d45652595e434984107e0faef329550b5294b1488a61fde74082ad: Status 404 returned error can't find the container with id d1f66f5d78d45652595e434984107e0faef329550b5294b1488a61fde74082ad Feb 27 10:49:04 crc kubenswrapper[4728]: I0227 10:49:04.665935 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:49:04 crc kubenswrapper[4728]: W0227 10:49:04.666935 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafb3f3cd_8dc7_4bfb_b46d_aa7d1eef7dc0.slice/crio-ff529f21983fdc9638da435b921b17a0f6ffc0212c182cea404eb29a199aec79 WatchSource:0}: Error finding container ff529f21983fdc9638da435b921b17a0f6ffc0212c182cea404eb29a199aec79: Status 404 returned error can't find the container with id ff529f21983fdc9638da435b921b17a0f6ffc0212c182cea404eb29a199aec79 Feb 27 10:49:04 crc kubenswrapper[4728]: I0227 10:49:04.747774 4728 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="1080a33e-f552-404c-8381-281cf9ab1078" path="/var/lib/kubelet/pods/1080a33e-f552-404c-8381-281cf9ab1078/volumes" Feb 27 10:49:05 crc kubenswrapper[4728]: I0227 10:49:05.145004 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4489bcc6-1de2-45f0-993d-67bd941e697c","Type":"ContainerStarted","Data":"06ac0b483b2a9e06753561a7639abef68a0cd6228c20931c8119eba80490f3b5"} Feb 27 10:49:05 crc kubenswrapper[4728]: I0227 10:49:05.145332 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4489bcc6-1de2-45f0-993d-67bd941e697c","Type":"ContainerStarted","Data":"07d80261b3ede7b07a58551b40861c499ed3cc256ebce237279028efc99feb9c"} Feb 27 10:49:05 crc kubenswrapper[4728]: I0227 10:49:05.147295 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v2vbf" event={"ID":"f69c278c-545b-4a40-9f34-53d895c528c0","Type":"ContainerStarted","Data":"d1b547d94847f3dca4ab8f436e619449567c33b3c8310f9df38636fe58550781"} Feb 27 10:49:05 crc kubenswrapper[4728]: I0227 10:49:05.154211 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7dd98b-314c-4e6c-a45b-31168398fca3","Type":"ContainerStarted","Data":"9f7b948c9f018079389be6b868387bb61b1aa295bd67bb6c8159167b8791e6d6"} Feb 27 10:49:05 crc kubenswrapper[4728]: I0227 10:49:05.163285 4728 generic.go:334] "Generic (PLEG): container finished" podID="085a2ebe-2930-448c-aec9-396b505fb399" containerID="d58201e350456ff6648eab9f54f9a69bc1f8422e0ff56b1e2779544fae8a2756" exitCode=0 Feb 27 10:49:05 crc kubenswrapper[4728]: I0227 10:49:05.163399 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4msxf" event={"ID":"085a2ebe-2930-448c-aec9-396b505fb399","Type":"ContainerDied","Data":"d58201e350456ff6648eab9f54f9a69bc1f8422e0ff56b1e2779544fae8a2756"} Feb 27 10:49:05 crc kubenswrapper[4728]: I0227 
10:49:05.164268 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-v2vbf" podStartSLOduration=3.105804892 podStartE2EDuration="38.164253934s" podCreationTimestamp="2026-02-27 10:48:27 +0000 UTC" firstStartedPulling="2026-02-27 10:48:28.959194129 +0000 UTC m=+1328.921560275" lastFinishedPulling="2026-02-27 10:49:04.017643211 +0000 UTC m=+1363.980009317" observedRunningTime="2026-02-27 10:49:05.163483904 +0000 UTC m=+1365.125850010" watchObservedRunningTime="2026-02-27 10:49:05.164253934 +0000 UTC m=+1365.126620040" Feb 27 10:49:05 crc kubenswrapper[4728]: I0227 10:49:05.174269 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tstt8" event={"ID":"d3ddaa20-d617-4d45-9e82-b31c982de147","Type":"ContainerStarted","Data":"a01d10a99a8b2be9ba0ab1bc9e3953c78b32c9585aaa9aa0150af4480b4e8091"} Feb 27 10:49:05 crc kubenswrapper[4728]: I0227 10:49:05.174316 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tstt8" event={"ID":"d3ddaa20-d617-4d45-9e82-b31c982de147","Type":"ContainerStarted","Data":"d1f66f5d78d45652595e434984107e0faef329550b5294b1488a61fde74082ad"} Feb 27 10:49:05 crc kubenswrapper[4728]: I0227 10:49:05.176159 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0","Type":"ContainerStarted","Data":"ff529f21983fdc9638da435b921b17a0f6ffc0212c182cea404eb29a199aec79"} Feb 27 10:49:05 crc kubenswrapper[4728]: I0227 10:49:05.179664 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tmx8h" event={"ID":"141bf253-61a8-46a0-9d10-9aefbbd124c6","Type":"ContainerStarted","Data":"2b52ca2354545d301b5b9fd9a4be5db5b27d3cca3b6e1522f1ed3aecdd9a19b0"} Feb 27 10:49:05 crc kubenswrapper[4728]: I0227 10:49:05.218419 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-tmx8h" 
podStartSLOduration=3.517789181 podStartE2EDuration="38.218397433s" podCreationTimestamp="2026-02-27 10:48:27 +0000 UTC" firstStartedPulling="2026-02-27 10:48:29.259758015 +0000 UTC m=+1329.222124121" lastFinishedPulling="2026-02-27 10:49:03.960366267 +0000 UTC m=+1363.922732373" observedRunningTime="2026-02-27 10:49:05.201565697 +0000 UTC m=+1365.163931803" watchObservedRunningTime="2026-02-27 10:49:05.218397433 +0000 UTC m=+1365.180763529" Feb 27 10:49:05 crc kubenswrapper[4728]: I0227 10:49:05.231675 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tstt8" podStartSLOduration=25.231651794 podStartE2EDuration="25.231651794s" podCreationTimestamp="2026-02-27 10:48:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:49:05.229082834 +0000 UTC m=+1365.191448940" watchObservedRunningTime="2026-02-27 10:49:05.231651794 +0000 UTC m=+1365.194017900" Feb 27 10:49:05 crc kubenswrapper[4728]: I0227 10:49:05.922073 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:49:05 crc kubenswrapper[4728]: I0227 10:49:05.922349 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:49:05 crc kubenswrapper[4728]: I0227 10:49:05.922391 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:49:05 crc 
kubenswrapper[4728]: I0227 10:49:05.923188 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38e4421806f8078d8e00d718689caad66ee119d857ee6a04b69a7a968f3e70aa"} pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 10:49:05 crc kubenswrapper[4728]: I0227 10:49:05.923233 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" containerID="cri-o://38e4421806f8078d8e00d718689caad66ee119d857ee6a04b69a7a968f3e70aa" gracePeriod=600 Feb 27 10:49:06 crc kubenswrapper[4728]: I0227 10:49:06.192750 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7dd98b-314c-4e6c-a45b-31168398fca3","Type":"ContainerStarted","Data":"4d48e0452ff5d1b0b1e2e160be3339fd25478438c80b2d61cdb3bdc1a989c207"} Feb 27 10:49:06 crc kubenswrapper[4728]: I0227 10:49:06.200228 4728 generic.go:334] "Generic (PLEG): container finished" podID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerID="38e4421806f8078d8e00d718689caad66ee119d857ee6a04b69a7a968f3e70aa" exitCode=0 Feb 27 10:49:06 crc kubenswrapper[4728]: I0227 10:49:06.200606 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerDied","Data":"38e4421806f8078d8e00d718689caad66ee119d857ee6a04b69a7a968f3e70aa"} Feb 27 10:49:06 crc kubenswrapper[4728]: I0227 10:49:06.200778 4728 scope.go:117] "RemoveContainer" containerID="25e402b0eb27122e7fbd4811edc6c8ff99dce0897b61a2efd27b0c5dbb0c9671" Feb 27 10:49:06 crc kubenswrapper[4728]: I0227 10:49:06.210811 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0","Type":"ContainerStarted","Data":"ab4d5066d00303bc3e83ffd52e4b2f80c96fb80255ced422c66039e161a36d37"} Feb 27 10:49:06 crc kubenswrapper[4728]: I0227 10:49:06.709703 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4msxf" Feb 27 10:49:06 crc kubenswrapper[4728]: I0227 10:49:06.804466 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/085a2ebe-2930-448c-aec9-396b505fb399-config\") pod \"085a2ebe-2930-448c-aec9-396b505fb399\" (UID: \"085a2ebe-2930-448c-aec9-396b505fb399\") " Feb 27 10:49:06 crc kubenswrapper[4728]: I0227 10:49:06.804581 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085a2ebe-2930-448c-aec9-396b505fb399-combined-ca-bundle\") pod \"085a2ebe-2930-448c-aec9-396b505fb399\" (UID: \"085a2ebe-2930-448c-aec9-396b505fb399\") " Feb 27 10:49:06 crc kubenswrapper[4728]: I0227 10:49:06.804709 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56z9r\" (UniqueName: \"kubernetes.io/projected/085a2ebe-2930-448c-aec9-396b505fb399-kube-api-access-56z9r\") pod \"085a2ebe-2930-448c-aec9-396b505fb399\" (UID: \"085a2ebe-2930-448c-aec9-396b505fb399\") " Feb 27 10:49:06 crc kubenswrapper[4728]: I0227 10:49:06.811408 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/085a2ebe-2930-448c-aec9-396b505fb399-kube-api-access-56z9r" (OuterVolumeSpecName: "kube-api-access-56z9r") pod "085a2ebe-2930-448c-aec9-396b505fb399" (UID: "085a2ebe-2930-448c-aec9-396b505fb399"). InnerVolumeSpecName "kube-api-access-56z9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:06 crc kubenswrapper[4728]: I0227 10:49:06.841167 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085a2ebe-2930-448c-aec9-396b505fb399-config" (OuterVolumeSpecName: "config") pod "085a2ebe-2930-448c-aec9-396b505fb399" (UID: "085a2ebe-2930-448c-aec9-396b505fb399"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:06 crc kubenswrapper[4728]: I0227 10:49:06.843606 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085a2ebe-2930-448c-aec9-396b505fb399-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "085a2ebe-2930-448c-aec9-396b505fb399" (UID: "085a2ebe-2930-448c-aec9-396b505fb399"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:06 crc kubenswrapper[4728]: I0227 10:49:06.910600 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/085a2ebe-2930-448c-aec9-396b505fb399-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:06 crc kubenswrapper[4728]: I0227 10:49:06.910763 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085a2ebe-2930-448c-aec9-396b505fb399-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:06 crc kubenswrapper[4728]: I0227 10:49:06.910814 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56z9r\" (UniqueName: \"kubernetes.io/projected/085a2ebe-2930-448c-aec9-396b505fb399-kube-api-access-56z9r\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.232016 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" 
event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"e31218d6f1087057441016d4a0b5eeb91a41486bd3e9c9784604100aaaedc60a"} Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.235929 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4489bcc6-1de2-45f0-993d-67bd941e697c","Type":"ContainerStarted","Data":"3e5a486e6493ed0a69a24a8111f106acb12103b60d7d09f35b9f7062ba41f0d1"} Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.240660 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0","Type":"ContainerStarted","Data":"a1ba93c922074640e7fa96eddc3052a948ef0eb1207edaf20ab7ea540b868666"} Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.247372 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4msxf" event={"ID":"085a2ebe-2930-448c-aec9-396b505fb399","Type":"ContainerDied","Data":"e30b22412fbe300f318a754b43fe074f272d418426d4054a4fcf5ccdcd0d6a15"} Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.247412 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e30b22412fbe300f318a754b43fe074f272d418426d4054a4fcf5ccdcd0d6a15" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.247487 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4msxf" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.301300 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=21.301277493 podStartE2EDuration="21.301277493s" podCreationTimestamp="2026-02-27 10:48:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:49:07.279176743 +0000 UTC m=+1367.241542839" watchObservedRunningTime="2026-02-27 10:49:07.301277493 +0000 UTC m=+1367.263643599" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.326981 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=31.32695646 podStartE2EDuration="31.32695646s" podCreationTimestamp="2026-02-27 10:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:49:07.310826932 +0000 UTC m=+1367.273193048" watchObservedRunningTime="2026-02-27 10:49:07.32695646 +0000 UTC m=+1367.289322566" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.346031 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.346077 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.427463 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.462683 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-2ch9l"] Feb 27 10:49:07 crc kubenswrapper[4728]: E0227 10:49:07.463135 4728 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="085a2ebe-2930-448c-aec9-396b505fb399" containerName="neutron-db-sync" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.463147 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="085a2ebe-2930-448c-aec9-396b505fb399" containerName="neutron-db-sync" Feb 27 10:49:07 crc kubenswrapper[4728]: E0227 10:49:07.463164 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1080a33e-f552-404c-8381-281cf9ab1078" containerName="init" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.463170 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1080a33e-f552-404c-8381-281cf9ab1078" containerName="init" Feb 27 10:49:07 crc kubenswrapper[4728]: E0227 10:49:07.463191 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1080a33e-f552-404c-8381-281cf9ab1078" containerName="dnsmasq-dns" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.463198 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1080a33e-f552-404c-8381-281cf9ab1078" containerName="dnsmasq-dns" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.463360 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="085a2ebe-2930-448c-aec9-396b505fb399" containerName="neutron-db-sync" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.463383 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="1080a33e-f552-404c-8381-281cf9ab1078" containerName="dnsmasq-dns" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.475401 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.475627 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.578774 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-2ch9l"] Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.629901 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-2ch9l\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.630071 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-2ch9l\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.630186 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-798c7\" (UniqueName: \"kubernetes.io/projected/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-kube-api-access-798c7\") pod \"dnsmasq-dns-5ccc5c4795-2ch9l\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.630307 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-2ch9l\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.630567 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-2ch9l\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.630589 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-config\") pod \"dnsmasq-dns-5ccc5c4795-2ch9l\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.655809 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6bbb6b68c6-zhv6j"] Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.660019 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.661562 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.663077 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.663328 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-w6w2k" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.663468 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.684581 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bbb6b68c6-zhv6j"] Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.732614 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-2ch9l\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.732691 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49hl9\" (UniqueName: \"kubernetes.io/projected/ae36fee1-d5a7-470b-ae15-4eeb8d126951-kube-api-access-49hl9\") pod \"neutron-6bbb6b68c6-zhv6j\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.732745 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-798c7\" (UniqueName: \"kubernetes.io/projected/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-kube-api-access-798c7\") pod \"dnsmasq-dns-5ccc5c4795-2ch9l\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.732788 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-combined-ca-bundle\") pod \"neutron-6bbb6b68c6-zhv6j\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.732838 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-2ch9l\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.732854 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-config\") pod \"neutron-6bbb6b68c6-zhv6j\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.732914 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-httpd-config\") pod \"neutron-6bbb6b68c6-zhv6j\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.732966 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-2ch9l\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.732986 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-config\") pod \"dnsmasq-dns-5ccc5c4795-2ch9l\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.733029 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-2ch9l\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.733065 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-ovndb-tls-certs\") pod \"neutron-6bbb6b68c6-zhv6j\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.733909 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-2ch9l\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.733931 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-2ch9l\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.734431 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-2ch9l\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.734491 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-config\") pod \"dnsmasq-dns-5ccc5c4795-2ch9l\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.735000 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5ccc5c4795-2ch9l\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.778554 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-798c7\" (UniqueName: \"kubernetes.io/projected/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-kube-api-access-798c7\") pod \"dnsmasq-dns-5ccc5c4795-2ch9l\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.835157 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-httpd-config\") pod \"neutron-6bbb6b68c6-zhv6j\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.835441 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-ovndb-tls-certs\") pod \"neutron-6bbb6b68c6-zhv6j\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.835482 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49hl9\" (UniqueName: \"kubernetes.io/projected/ae36fee1-d5a7-470b-ae15-4eeb8d126951-kube-api-access-49hl9\") pod \"neutron-6bbb6b68c6-zhv6j\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.835560 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-combined-ca-bundle\") pod \"neutron-6bbb6b68c6-zhv6j\" (UID: 
\"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.835634 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-config\") pod \"neutron-6bbb6b68c6-zhv6j\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.843029 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.845241 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-combined-ca-bundle\") pod \"neutron-6bbb6b68c6-zhv6j\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.847261 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-config\") pod \"neutron-6bbb6b68c6-zhv6j\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.852218 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-ovndb-tls-certs\") pod \"neutron-6bbb6b68c6-zhv6j\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.857245 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-httpd-config\") pod 
\"neutron-6bbb6b68c6-zhv6j\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.863759 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49hl9\" (UniqueName: \"kubernetes.io/projected/ae36fee1-d5a7-470b-ae15-4eeb8d126951-kube-api-access-49hl9\") pod \"neutron-6bbb6b68c6-zhv6j\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:07 crc kubenswrapper[4728]: I0227 10:49:07.986173 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:08 crc kubenswrapper[4728]: I0227 10:49:08.261231 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 10:49:08 crc kubenswrapper[4728]: I0227 10:49:08.262871 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 10:49:08 crc kubenswrapper[4728]: I0227 10:49:08.441704 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-2ch9l"] Feb 27 10:49:08 crc kubenswrapper[4728]: I0227 10:49:08.739155 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bbb6b68c6-zhv6j"] Feb 27 10:49:09 crc kubenswrapper[4728]: I0227 10:49:09.273417 4728 generic.go:334] "Generic (PLEG): container finished" podID="fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7" containerID="a83c4c7783488bed6d7e79ac3c7209d140d8857acc63750129049031c01e89ae" exitCode=0 Feb 27 10:49:09 crc kubenswrapper[4728]: I0227 10:49:09.273736 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" event={"ID":"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7","Type":"ContainerDied","Data":"a83c4c7783488bed6d7e79ac3c7209d140d8857acc63750129049031c01e89ae"} Feb 27 10:49:09 crc kubenswrapper[4728]: I0227 10:49:09.273760 
4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" event={"ID":"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7","Type":"ContainerStarted","Data":"3749a9512ab3ef2baa36b6560a404280eb44ee0dbd154852727285e2362ab66c"} Feb 27 10:49:09 crc kubenswrapper[4728]: I0227 10:49:09.275732 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bbb6b68c6-zhv6j" event={"ID":"ae36fee1-d5a7-470b-ae15-4eeb8d126951","Type":"ContainerStarted","Data":"a62657ad6dd5ec6fc69071e4f8e44e841db4ea46839b61757b39e1624b584191"} Feb 27 10:49:09 crc kubenswrapper[4728]: I0227 10:49:09.969567 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-54dbd7489c-x96kn"] Feb 27 10:49:09 crc kubenswrapper[4728]: I0227 10:49:09.971688 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:09 crc kubenswrapper[4728]: I0227 10:49:09.974712 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 27 10:49:09 crc kubenswrapper[4728]: I0227 10:49:09.974929 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:09.999676 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54dbd7489c-x96kn"] Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.023650 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rrz4\" (UniqueName: \"kubernetes.io/projected/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-kube-api-access-2rrz4\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.023981 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-combined-ca-bundle\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.024173 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-config\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.024294 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-internal-tls-certs\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.024399 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-ovndb-tls-certs\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.024765 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-httpd-config\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.024843 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-public-tls-certs\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.130857 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-httpd-config\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.130916 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-public-tls-certs\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.130985 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rrz4\" (UniqueName: \"kubernetes.io/projected/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-kube-api-access-2rrz4\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.131021 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-combined-ca-bundle\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.131085 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-config\") pod 
\"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.131141 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-internal-tls-certs\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.131179 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-ovndb-tls-certs\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.136562 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-combined-ca-bundle\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.139298 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-internal-tls-certs\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.139911 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-ovndb-tls-certs\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " 
pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.140060 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-httpd-config\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.140188 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-config\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.140693 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-public-tls-certs\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.153220 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rrz4\" (UniqueName: \"kubernetes.io/projected/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-kube-api-access-2rrz4\") pod \"neutron-54dbd7489c-x96kn\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:10 crc kubenswrapper[4728]: I0227 10:49:10.303550 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:11 crc kubenswrapper[4728]: I0227 10:49:11.303481 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bbb6b68c6-zhv6j" event={"ID":"ae36fee1-d5a7-470b-ae15-4eeb8d126951","Type":"ContainerStarted","Data":"8ff7d0184cf0adf6783ca2ef0e4fa98bcdbe844421c7a65a377b153dd3e99288"} Feb 27 10:49:12 crc kubenswrapper[4728]: I0227 10:49:12.357114 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" event={"ID":"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7","Type":"ContainerStarted","Data":"4815408e41f007cf27f75fa6d1de8f0837382754e52ee954edfc5a25e9b7d664"} Feb 27 10:49:12 crc kubenswrapper[4728]: I0227 10:49:12.357745 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:12 crc kubenswrapper[4728]: I0227 10:49:12.389230 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" podStartSLOduration=5.389212135 podStartE2EDuration="5.389212135s" podCreationTimestamp="2026-02-27 10:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:49:12.383775447 +0000 UTC m=+1372.346141563" watchObservedRunningTime="2026-02-27 10:49:12.389212135 +0000 UTC m=+1372.351578241" Feb 27 10:49:13 crc kubenswrapper[4728]: I0227 10:49:13.384884 4728 generic.go:334] "Generic (PLEG): container finished" podID="141bf253-61a8-46a0-9d10-9aefbbd124c6" containerID="2b52ca2354545d301b5b9fd9a4be5db5b27d3cca3b6e1522f1ed3aecdd9a19b0" exitCode=0 Feb 27 10:49:13 crc kubenswrapper[4728]: I0227 10:49:13.384956 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tmx8h" 
event={"ID":"141bf253-61a8-46a0-9d10-9aefbbd124c6","Type":"ContainerDied","Data":"2b52ca2354545d301b5b9fd9a4be5db5b27d3cca3b6e1522f1ed3aecdd9a19b0"} Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.308456 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54dbd7489c-x96kn"] Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.399699 4728 generic.go:334] "Generic (PLEG): container finished" podID="d3ddaa20-d617-4d45-9e82-b31c982de147" containerID="a01d10a99a8b2be9ba0ab1bc9e3953c78b32c9585aaa9aa0150af4480b4e8091" exitCode=0 Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.399799 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tstt8" event={"ID":"d3ddaa20-d617-4d45-9e82-b31c982de147","Type":"ContainerDied","Data":"a01d10a99a8b2be9ba0ab1bc9e3953c78b32c9585aaa9aa0150af4480b4e8091"} Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.403543 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bbb6b68c6-zhv6j" event={"ID":"ae36fee1-d5a7-470b-ae15-4eeb8d126951","Type":"ContainerStarted","Data":"2ec2cfe466d703d9c3909bfd575c33a84f34a43077f1ea7d4059f7e53fd9e1c3"} Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.403698 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.410462 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54dbd7489c-x96kn" event={"ID":"ac391aa0-3053-4675-a2c2-8c418ed9bd3a","Type":"ContainerStarted","Data":"aab8d97bffb126c27ac1d703ea996f44950eb8e950724b9e56f18ba77aa678f3"} Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.423975 4728 generic.go:334] "Generic (PLEG): container finished" podID="f69c278c-545b-4a40-9f34-53d895c528c0" containerID="d1b547d94847f3dca4ab8f436e619449567c33b3c8310f9df38636fe58550781" exitCode=0 Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.424034 4728 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v2vbf" event={"ID":"f69c278c-545b-4a40-9f34-53d895c528c0","Type":"ContainerDied","Data":"d1b547d94847f3dca4ab8f436e619449567c33b3c8310f9df38636fe58550781"} Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.427532 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7dd98b-314c-4e6c-a45b-31168398fca3","Type":"ContainerStarted","Data":"c8824d9275d35912d4d0e9e5c1b5cbaa6967a69a0ddaee99fe5f4d00d59e606a"} Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.490689 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6bbb6b68c6-zhv6j" podStartSLOduration=7.490672458 podStartE2EDuration="7.490672458s" podCreationTimestamp="2026-02-27 10:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:49:14.485156039 +0000 UTC m=+1374.447522145" watchObservedRunningTime="2026-02-27 10:49:14.490672458 +0000 UTC m=+1374.453038564" Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.873265 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tmx8h" Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.952891 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141bf253-61a8-46a0-9d10-9aefbbd124c6-config-data\") pod \"141bf253-61a8-46a0-9d10-9aefbbd124c6\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.953003 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141bf253-61a8-46a0-9d10-9aefbbd124c6-logs\") pod \"141bf253-61a8-46a0-9d10-9aefbbd124c6\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.953066 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141bf253-61a8-46a0-9d10-9aefbbd124c6-combined-ca-bundle\") pod \"141bf253-61a8-46a0-9d10-9aefbbd124c6\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.953163 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg9b5\" (UniqueName: \"kubernetes.io/projected/141bf253-61a8-46a0-9d10-9aefbbd124c6-kube-api-access-rg9b5\") pod \"141bf253-61a8-46a0-9d10-9aefbbd124c6\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.953299 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141bf253-61a8-46a0-9d10-9aefbbd124c6-scripts\") pod \"141bf253-61a8-46a0-9d10-9aefbbd124c6\" (UID: \"141bf253-61a8-46a0-9d10-9aefbbd124c6\") " Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.953460 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/141bf253-61a8-46a0-9d10-9aefbbd124c6-logs" (OuterVolumeSpecName: "logs") pod "141bf253-61a8-46a0-9d10-9aefbbd124c6" (UID: "141bf253-61a8-46a0-9d10-9aefbbd124c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.953783 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141bf253-61a8-46a0-9d10-9aefbbd124c6-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.957706 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141bf253-61a8-46a0-9d10-9aefbbd124c6-scripts" (OuterVolumeSpecName: "scripts") pod "141bf253-61a8-46a0-9d10-9aefbbd124c6" (UID: "141bf253-61a8-46a0-9d10-9aefbbd124c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.958037 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/141bf253-61a8-46a0-9d10-9aefbbd124c6-kube-api-access-rg9b5" (OuterVolumeSpecName: "kube-api-access-rg9b5") pod "141bf253-61a8-46a0-9d10-9aefbbd124c6" (UID: "141bf253-61a8-46a0-9d10-9aefbbd124c6"). InnerVolumeSpecName "kube-api-access-rg9b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.983771 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141bf253-61a8-46a0-9d10-9aefbbd124c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "141bf253-61a8-46a0-9d10-9aefbbd124c6" (UID: "141bf253-61a8-46a0-9d10-9aefbbd124c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:14 crc kubenswrapper[4728]: I0227 10:49:14.989719 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141bf253-61a8-46a0-9d10-9aefbbd124c6-config-data" (OuterVolumeSpecName: "config-data") pod "141bf253-61a8-46a0-9d10-9aefbbd124c6" (UID: "141bf253-61a8-46a0-9d10-9aefbbd124c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.055528 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141bf253-61a8-46a0-9d10-9aefbbd124c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.055560 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg9b5\" (UniqueName: \"kubernetes.io/projected/141bf253-61a8-46a0-9d10-9aefbbd124c6-kube-api-access-rg9b5\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.055572 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141bf253-61a8-46a0-9d10-9aefbbd124c6-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.055580 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141bf253-61a8-46a0-9d10-9aefbbd124c6-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.439536 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54dbd7489c-x96kn" event={"ID":"ac391aa0-3053-4675-a2c2-8c418ed9bd3a","Type":"ContainerStarted","Data":"cc4e75e51c2221b31239e3ccc5cb02ff809b659244ceda42d4952a1c7d985487"} Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.439896 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-54dbd7489c-x96kn" event={"ID":"ac391aa0-3053-4675-a2c2-8c418ed9bd3a","Type":"ContainerStarted","Data":"a109578385f7c7f0783ccb545d370fef90c64a9f399e14183da769b7a760367f"} Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.440024 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.442812 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tmx8h" event={"ID":"141bf253-61a8-46a0-9d10-9aefbbd124c6","Type":"ContainerDied","Data":"087f7830cba62f0afa60f87852d0de52a3bef4dad70f204816f650d0aff4cd02"} Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.442866 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="087f7830cba62f0afa60f87852d0de52a3bef4dad70f204816f650d0aff4cd02" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.442923 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tmx8h" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.496143 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-54dbd7489c-x96kn" podStartSLOduration=6.496121741 podStartE2EDuration="6.496121741s" podCreationTimestamp="2026-02-27 10:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:49:15.463278309 +0000 UTC m=+1375.425644415" watchObservedRunningTime="2026-02-27 10:49:15.496121741 +0000 UTC m=+1375.458487837" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.684564 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7ff4498744-6lwbb"] Feb 27 10:49:15 crc kubenswrapper[4728]: E0227 10:49:15.685014 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141bf253-61a8-46a0-9d10-9aefbbd124c6" containerName="placement-db-sync" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.685038 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="141bf253-61a8-46a0-9d10-9aefbbd124c6" containerName="placement-db-sync" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.685621 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="141bf253-61a8-46a0-9d10-9aefbbd124c6" containerName="placement-db-sync" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.687003 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.691581 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.691831 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9jk9w" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.691907 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.692140 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.692708 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.712452 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7ff4498744-6lwbb"] Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.774085 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-scripts\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.774418 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-config-data\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.774586 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-internal-tls-certs\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.774844 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-public-tls-certs\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.775003 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vm4z\" (UniqueName: \"kubernetes.io/projected/a4671725-3a26-4c77-ad25-01c9aa82bdf0-kube-api-access-7vm4z\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.775146 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-combined-ca-bundle\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.775238 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4671725-3a26-4c77-ad25-01c9aa82bdf0-logs\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.877098 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vm4z\" (UniqueName: \"kubernetes.io/projected/a4671725-3a26-4c77-ad25-01c9aa82bdf0-kube-api-access-7vm4z\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.877188 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-combined-ca-bundle\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.877249 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4671725-3a26-4c77-ad25-01c9aa82bdf0-logs\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.877302 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-scripts\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.877336 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-config-data\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.877374 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-internal-tls-certs\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.877433 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-public-tls-certs\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.878176 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4671725-3a26-4c77-ad25-01c9aa82bdf0-logs\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.881709 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-public-tls-certs\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.883866 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-config-data\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.898786 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-scripts\") pod \"placement-7ff4498744-6lwbb\" (UID: 
\"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.899288 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-combined-ca-bundle\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.901999 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-internal-tls-certs\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.924764 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vm4z\" (UniqueName: \"kubernetes.io/projected/a4671725-3a26-4c77-ad25-01c9aa82bdf0-kube-api-access-7vm4z\") pod \"placement-7ff4498744-6lwbb\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:15 crc kubenswrapper[4728]: I0227 10:49:15.967222 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.010603 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.300235 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.333413 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-v2vbf" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.395492 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgrmr\" (UniqueName: \"kubernetes.io/projected/d3ddaa20-d617-4d45-9e82-b31c982de147-kube-api-access-rgrmr\") pod \"d3ddaa20-d617-4d45-9e82-b31c982de147\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.395583 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-fernet-keys\") pod \"d3ddaa20-d617-4d45-9e82-b31c982de147\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.395639 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-credential-keys\") pod \"d3ddaa20-d617-4d45-9e82-b31c982de147\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.395676 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-config-data\") pod \"d3ddaa20-d617-4d45-9e82-b31c982de147\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.395750 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f69c278c-545b-4a40-9f34-53d895c528c0-db-sync-config-data\") pod \"f69c278c-545b-4a40-9f34-53d895c528c0\" (UID: \"f69c278c-545b-4a40-9f34-53d895c528c0\") " Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.395795 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-combined-ca-bundle\") pod \"d3ddaa20-d617-4d45-9e82-b31c982de147\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.395883 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7cb2\" (UniqueName: \"kubernetes.io/projected/f69c278c-545b-4a40-9f34-53d895c528c0-kube-api-access-x7cb2\") pod \"f69c278c-545b-4a40-9f34-53d895c528c0\" (UID: \"f69c278c-545b-4a40-9f34-53d895c528c0\") " Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.396017 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69c278c-545b-4a40-9f34-53d895c528c0-combined-ca-bundle\") pod \"f69c278c-545b-4a40-9f34-53d895c528c0\" (UID: \"f69c278c-545b-4a40-9f34-53d895c528c0\") " Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.396073 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-scripts\") pod \"d3ddaa20-d617-4d45-9e82-b31c982de147\" (UID: \"d3ddaa20-d617-4d45-9e82-b31c982de147\") " Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.403938 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ddaa20-d617-4d45-9e82-b31c982de147-kube-api-access-rgrmr" (OuterVolumeSpecName: "kube-api-access-rgrmr") pod "d3ddaa20-d617-4d45-9e82-b31c982de147" (UID: "d3ddaa20-d617-4d45-9e82-b31c982de147"). InnerVolumeSpecName "kube-api-access-rgrmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.407858 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d3ddaa20-d617-4d45-9e82-b31c982de147" (UID: "d3ddaa20-d617-4d45-9e82-b31c982de147"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.408981 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69c278c-545b-4a40-9f34-53d895c528c0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f69c278c-545b-4a40-9f34-53d895c528c0" (UID: "f69c278c-545b-4a40-9f34-53d895c528c0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.412346 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f69c278c-545b-4a40-9f34-53d895c528c0-kube-api-access-x7cb2" (OuterVolumeSpecName: "kube-api-access-x7cb2") pod "f69c278c-545b-4a40-9f34-53d895c528c0" (UID: "f69c278c-545b-4a40-9f34-53d895c528c0"). InnerVolumeSpecName "kube-api-access-x7cb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.416928 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-scripts" (OuterVolumeSpecName: "scripts") pod "d3ddaa20-d617-4d45-9e82-b31c982de147" (UID: "d3ddaa20-d617-4d45-9e82-b31c982de147"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.417461 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d3ddaa20-d617-4d45-9e82-b31c982de147" (UID: "d3ddaa20-d617-4d45-9e82-b31c982de147"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.454685 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69c278c-545b-4a40-9f34-53d895c528c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f69c278c-545b-4a40-9f34-53d895c528c0" (UID: "f69c278c-545b-4a40-9f34-53d895c528c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.474649 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tstt8" event={"ID":"d3ddaa20-d617-4d45-9e82-b31c982de147","Type":"ContainerDied","Data":"d1f66f5d78d45652595e434984107e0faef329550b5294b1488a61fde74082ad"} Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.474702 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1f66f5d78d45652595e434984107e0faef329550b5294b1488a61fde74082ad" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.474733 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tstt8" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.481160 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-v2vbf" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.484301 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v2vbf" event={"ID":"f69c278c-545b-4a40-9f34-53d895c528c0","Type":"ContainerDied","Data":"e5bded05e85b2d9b7887263d81a2c417712efa950d6ff9789f418058a03144b8"} Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.484335 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5bded05e85b2d9b7887263d81a2c417712efa950d6ff9789f418058a03144b8" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.492887 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.492930 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.510543 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-config-data" (OuterVolumeSpecName: "config-data") pod "d3ddaa20-d617-4d45-9e82-b31c982de147" (UID: "d3ddaa20-d617-4d45-9e82-b31c982de147"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.510734 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgrmr\" (UniqueName: \"kubernetes.io/projected/d3ddaa20-d617-4d45-9e82-b31c982de147-kube-api-access-rgrmr\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.510886 4728 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.511006 4728 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.511022 4728 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f69c278c-545b-4a40-9f34-53d895c528c0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.511035 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7cb2\" (UniqueName: \"kubernetes.io/projected/f69c278c-545b-4a40-9f34-53d895c528c0-kube-api-access-x7cb2\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.511047 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69c278c-545b-4a40-9f34-53d895c528c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.511210 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 
10:49:16.538367 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3ddaa20-d617-4d45-9e82-b31c982de147" (UID: "d3ddaa20-d617-4d45-9e82-b31c982de147"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.578896 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.612829 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.612856 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ddaa20-d617-4d45-9e82-b31c982de147-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.627881 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6cb4766cb5-wf4ln"] Feb 27 10:49:16 crc kubenswrapper[4728]: E0227 10:49:16.628561 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ddaa20-d617-4d45-9e82-b31c982de147" containerName="keystone-bootstrap" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.628583 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ddaa20-d617-4d45-9e82-b31c982de147" containerName="keystone-bootstrap" Feb 27 10:49:16 crc kubenswrapper[4728]: E0227 10:49:16.628610 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69c278c-545b-4a40-9f34-53d895c528c0" containerName="barbican-db-sync" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.628616 4728 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f69c278c-545b-4a40-9f34-53d895c528c0" containerName="barbican-db-sync" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.628835 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f69c278c-545b-4a40-9f34-53d895c528c0" containerName="barbican-db-sync" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.628857 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ddaa20-d617-4d45-9e82-b31c982de147" containerName="keystone-bootstrap" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.629575 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.642876 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.643080 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.655960 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6cb4766cb5-wf4ln"] Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.685130 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.690136 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7ff4498744-6lwbb"] Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.714748 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-public-tls-certs\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.714798 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-fernet-keys\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.714826 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-scripts\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.714846 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-config-data\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.714876 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-internal-tls-certs\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.714932 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjlqr\" (UniqueName: \"kubernetes.io/projected/055c18b7-6800-4f96-b544-7fc72a1eb468-kube-api-access-pjlqr\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.714954 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-combined-ca-bundle\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.714989 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-credential-keys\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.816587 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-credential-keys\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.816765 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-public-tls-certs\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.816798 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-fernet-keys\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.816825 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-scripts\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.816855 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-config-data\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.816891 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-internal-tls-certs\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.816932 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjlqr\" (UniqueName: \"kubernetes.io/projected/055c18b7-6800-4f96-b544-7fc72a1eb468-kube-api-access-pjlqr\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.816952 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-combined-ca-bundle\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.825491 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-scripts\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.842216 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-combined-ca-bundle\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.842827 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-credential-keys\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.842999 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5cd4cd64f5-rhzmp"] Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.846397 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.873026 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5cd4cd64f5-rhzmp"] Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.873219 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.873614 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-z6m4k" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.873836 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.876094 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-public-tls-certs\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.876519 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjlqr\" (UniqueName: \"kubernetes.io/projected/055c18b7-6800-4f96-b544-7fc72a1eb468-kube-api-access-pjlqr\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.879836 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-internal-tls-certs\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.894724 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-config-data\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.914892 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/055c18b7-6800-4f96-b544-7fc72a1eb468-fernet-keys\") pod \"keystone-6cb4766cb5-wf4ln\" (UID: \"055c18b7-6800-4f96-b544-7fc72a1eb468\") " pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.919746 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-config-data-custom\") pod \"barbican-worker-5cd4cd64f5-rhzmp\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.919790 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-combined-ca-bundle\") pod \"barbican-worker-5cd4cd64f5-rhzmp\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.919867 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwrjj\" (UniqueName: \"kubernetes.io/projected/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-kube-api-access-mwrjj\") pod \"barbican-worker-5cd4cd64f5-rhzmp\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.919887 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-config-data\") pod \"barbican-worker-5cd4cd64f5-rhzmp\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.920085 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-logs\") pod \"barbican-worker-5cd4cd64f5-rhzmp\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.952664 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7575ff7b96-fznp4"] Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.990526 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7575ff7b96-fznp4"] Feb 27 10:49:16 crc kubenswrapper[4728]: I0227 10:49:16.991015 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.000738 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.004278 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.027097 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-config-data-custom\") pod \"barbican-worker-5cd4cd64f5-rhzmp\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.027161 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-combined-ca-bundle\") pod \"barbican-worker-5cd4cd64f5-rhzmp\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.027224 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwrjj\" (UniqueName: \"kubernetes.io/projected/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-kube-api-access-mwrjj\") pod \"barbican-worker-5cd4cd64f5-rhzmp\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.027269 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-config-data\") pod \"barbican-worker-5cd4cd64f5-rhzmp\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.027312 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vn6d\" (UniqueName: \"kubernetes.io/projected/fb615ea1-cb62-47ba-886e-eb2f761fea63-kube-api-access-7vn6d\") pod 
\"barbican-keystone-listener-7575ff7b96-fznp4\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.027334 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb615ea1-cb62-47ba-886e-eb2f761fea63-config-data-custom\") pod \"barbican-keystone-listener-7575ff7b96-fznp4\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.027372 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb615ea1-cb62-47ba-886e-eb2f761fea63-config-data\") pod \"barbican-keystone-listener-7575ff7b96-fznp4\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.027425 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb615ea1-cb62-47ba-886e-eb2f761fea63-combined-ca-bundle\") pod \"barbican-keystone-listener-7575ff7b96-fznp4\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.027461 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb615ea1-cb62-47ba-886e-eb2f761fea63-logs\") pod \"barbican-keystone-listener-7575ff7b96-fznp4\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.027520 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-logs\") pod \"barbican-worker-5cd4cd64f5-rhzmp\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.028936 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-2ch9l"] Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.029246 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" podUID="fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7" containerName="dnsmasq-dns" containerID="cri-o://4815408e41f007cf27f75fa6d1de8f0837382754e52ee954edfc5a25e9b7d664" gracePeriod=10 Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.033175 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.041751 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-logs\") pod \"barbican-worker-5cd4cd64f5-rhzmp\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.056607 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-config-data-custom\") pod \"barbican-worker-5cd4cd64f5-rhzmp\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.056668 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-combined-ca-bundle\") pod \"barbican-worker-5cd4cd64f5-rhzmp\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.075706 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7676cbc4f4-f7krv"] Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.077942 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.100888 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwrjj\" (UniqueName: \"kubernetes.io/projected/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-kube-api-access-mwrjj\") pod \"barbican-worker-5cd4cd64f5-rhzmp\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.115924 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-config-data\") pod \"barbican-worker-5cd4cd64f5-rhzmp\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.131632 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb615ea1-cb62-47ba-886e-eb2f761fea63-combined-ca-bundle\") pod \"barbican-keystone-listener-7575ff7b96-fznp4\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.133988 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e093b475-30b1-478c-b832-8b7b61a5f8f5-config-data\") pod \"barbican-keystone-listener-7676cbc4f4-f7krv\" (UID: \"e093b475-30b1-478c-b832-8b7b61a5f8f5\") " pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.134102 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e093b475-30b1-478c-b832-8b7b61a5f8f5-config-data-custom\") pod \"barbican-keystone-listener-7676cbc4f4-f7krv\" (UID: \"e093b475-30b1-478c-b832-8b7b61a5f8f5\") " pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.139306 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb615ea1-cb62-47ba-886e-eb2f761fea63-logs\") pod \"barbican-keystone-listener-7575ff7b96-fznp4\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.139696 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cv5z\" (UniqueName: \"kubernetes.io/projected/e093b475-30b1-478c-b832-8b7b61a5f8f5-kube-api-access-2cv5z\") pod \"barbican-keystone-listener-7676cbc4f4-f7krv\" (UID: \"e093b475-30b1-478c-b832-8b7b61a5f8f5\") " pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.139790 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e093b475-30b1-478c-b832-8b7b61a5f8f5-logs\") pod \"barbican-keystone-listener-7676cbc4f4-f7krv\" (UID: \"e093b475-30b1-478c-b832-8b7b61a5f8f5\") " pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 
10:49:17.139979 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e093b475-30b1-478c-b832-8b7b61a5f8f5-combined-ca-bundle\") pod \"barbican-keystone-listener-7676cbc4f4-f7krv\" (UID: \"e093b475-30b1-478c-b832-8b7b61a5f8f5\") " pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.140114 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vn6d\" (UniqueName: \"kubernetes.io/projected/fb615ea1-cb62-47ba-886e-eb2f761fea63-kube-api-access-7vn6d\") pod \"barbican-keystone-listener-7575ff7b96-fznp4\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.140203 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb615ea1-cb62-47ba-886e-eb2f761fea63-config-data-custom\") pod \"barbican-keystone-listener-7575ff7b96-fznp4\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.140329 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb615ea1-cb62-47ba-886e-eb2f761fea63-config-data\") pod \"barbican-keystone-listener-7575ff7b96-fznp4\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.140403 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb615ea1-cb62-47ba-886e-eb2f761fea63-logs\") pod \"barbican-keystone-listener-7575ff7b96-fznp4\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " 
pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.150765 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb615ea1-cb62-47ba-886e-eb2f761fea63-config-data\") pod \"barbican-keystone-listener-7575ff7b96-fznp4\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.180640 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb615ea1-cb62-47ba-886e-eb2f761fea63-combined-ca-bundle\") pod \"barbican-keystone-listener-7575ff7b96-fznp4\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.203249 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vn6d\" (UniqueName: \"kubernetes.io/projected/fb615ea1-cb62-47ba-886e-eb2f761fea63-kube-api-access-7vn6d\") pod \"barbican-keystone-listener-7575ff7b96-fznp4\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.231281 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb615ea1-cb62-47ba-886e-eb2f761fea63-config-data-custom\") pod \"barbican-keystone-listener-7575ff7b96-fznp4\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.250856 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5b68b655dc-96rhh"] Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.254615 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5b68b655dc-96rhh" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.256311 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e093b475-30b1-478c-b832-8b7b61a5f8f5-config-data\") pod \"barbican-keystone-listener-7676cbc4f4-f7krv\" (UID: \"e093b475-30b1-478c-b832-8b7b61a5f8f5\") " pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.256349 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e093b475-30b1-478c-b832-8b7b61a5f8f5-config-data-custom\") pod \"barbican-keystone-listener-7676cbc4f4-f7krv\" (UID: \"e093b475-30b1-478c-b832-8b7b61a5f8f5\") " pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.257369 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cv5z\" (UniqueName: \"kubernetes.io/projected/e093b475-30b1-478c-b832-8b7b61a5f8f5-kube-api-access-2cv5z\") pod \"barbican-keystone-listener-7676cbc4f4-f7krv\" (UID: \"e093b475-30b1-478c-b832-8b7b61a5f8f5\") " pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.257411 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e093b475-30b1-478c-b832-8b7b61a5f8f5-logs\") pod \"barbican-keystone-listener-7676cbc4f4-f7krv\" (UID: \"e093b475-30b1-478c-b832-8b7b61a5f8f5\") " pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.257488 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e093b475-30b1-478c-b832-8b7b61a5f8f5-combined-ca-bundle\") pod \"barbican-keystone-listener-7676cbc4f4-f7krv\" (UID: \"e093b475-30b1-478c-b832-8b7b61a5f8f5\") " pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.262009 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e093b475-30b1-478c-b832-8b7b61a5f8f5-logs\") pod \"barbican-keystone-listener-7676cbc4f4-f7krv\" (UID: \"e093b475-30b1-478c-b832-8b7b61a5f8f5\") " pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.264278 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e093b475-30b1-478c-b832-8b7b61a5f8f5-combined-ca-bundle\") pod \"barbican-keystone-listener-7676cbc4f4-f7krv\" (UID: \"e093b475-30b1-478c-b832-8b7b61a5f8f5\") " pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.273314 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e093b475-30b1-478c-b832-8b7b61a5f8f5-config-data\") pod \"barbican-keystone-listener-7676cbc4f4-f7krv\" (UID: \"e093b475-30b1-478c-b832-8b7b61a5f8f5\") " pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.291466 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e093b475-30b1-478c-b832-8b7b61a5f8f5-config-data-custom\") pod \"barbican-keystone-listener-7676cbc4f4-f7krv\" (UID: \"e093b475-30b1-478c-b832-8b7b61a5f8f5\") " pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.329406 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-2cv5z\" (UniqueName: \"kubernetes.io/projected/e093b475-30b1-478c-b832-8b7b61a5f8f5-kube-api-access-2cv5z\") pod \"barbican-keystone-listener-7676cbc4f4-f7krv\" (UID: \"e093b475-30b1-478c-b832-8b7b61a5f8f5\") " pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.361300 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a75dd8a4-3980-4825-bacf-fe2f0a9221d6-config-data-custom\") pod \"barbican-worker-5b68b655dc-96rhh\" (UID: \"a75dd8a4-3980-4825-bacf-fe2f0a9221d6\") " pod="openstack/barbican-worker-5b68b655dc-96rhh" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.361350 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j29q4\" (UniqueName: \"kubernetes.io/projected/a75dd8a4-3980-4825-bacf-fe2f0a9221d6-kube-api-access-j29q4\") pod \"barbican-worker-5b68b655dc-96rhh\" (UID: \"a75dd8a4-3980-4825-bacf-fe2f0a9221d6\") " pod="openstack/barbican-worker-5b68b655dc-96rhh" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.361428 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a75dd8a4-3980-4825-bacf-fe2f0a9221d6-logs\") pod \"barbican-worker-5b68b655dc-96rhh\" (UID: \"a75dd8a4-3980-4825-bacf-fe2f0a9221d6\") " pod="openstack/barbican-worker-5b68b655dc-96rhh" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.361606 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a75dd8a4-3980-4825-bacf-fe2f0a9221d6-config-data\") pod \"barbican-worker-5b68b655dc-96rhh\" (UID: \"a75dd8a4-3980-4825-bacf-fe2f0a9221d6\") " pod="openstack/barbican-worker-5b68b655dc-96rhh" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 
10:49:17.361628 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a75dd8a4-3980-4825-bacf-fe2f0a9221d6-combined-ca-bundle\") pod \"barbican-worker-5b68b655dc-96rhh\" (UID: \"a75dd8a4-3980-4825-bacf-fe2f0a9221d6\") " pod="openstack/barbican-worker-5b68b655dc-96rhh" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.361728 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7676cbc4f4-f7krv"] Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.372558 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.410362 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.414576 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b68b655dc-96rhh"] Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.441345 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.442219 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-zcq55"] Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.444335 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.467326 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a75dd8a4-3980-4825-bacf-fe2f0a9221d6-config-data\") pod \"barbican-worker-5b68b655dc-96rhh\" (UID: \"a75dd8a4-3980-4825-bacf-fe2f0a9221d6\") " pod="openstack/barbican-worker-5b68b655dc-96rhh" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.467364 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a75dd8a4-3980-4825-bacf-fe2f0a9221d6-combined-ca-bundle\") pod \"barbican-worker-5b68b655dc-96rhh\" (UID: \"a75dd8a4-3980-4825-bacf-fe2f0a9221d6\") " pod="openstack/barbican-worker-5b68b655dc-96rhh" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.467445 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a75dd8a4-3980-4825-bacf-fe2f0a9221d6-config-data-custom\") pod \"barbican-worker-5b68b655dc-96rhh\" (UID: \"a75dd8a4-3980-4825-bacf-fe2f0a9221d6\") " pod="openstack/barbican-worker-5b68b655dc-96rhh" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.467487 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j29q4\" (UniqueName: \"kubernetes.io/projected/a75dd8a4-3980-4825-bacf-fe2f0a9221d6-kube-api-access-j29q4\") pod \"barbican-worker-5b68b655dc-96rhh\" (UID: \"a75dd8a4-3980-4825-bacf-fe2f0a9221d6\") " pod="openstack/barbican-worker-5b68b655dc-96rhh" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.467635 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a75dd8a4-3980-4825-bacf-fe2f0a9221d6-logs\") pod \"barbican-worker-5b68b655dc-96rhh\" (UID: \"a75dd8a4-3980-4825-bacf-fe2f0a9221d6\") 
" pod="openstack/barbican-worker-5b68b655dc-96rhh" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.470561 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a75dd8a4-3980-4825-bacf-fe2f0a9221d6-logs\") pod \"barbican-worker-5b68b655dc-96rhh\" (UID: \"a75dd8a4-3980-4825-bacf-fe2f0a9221d6\") " pod="openstack/barbican-worker-5b68b655dc-96rhh" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.480465 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a75dd8a4-3980-4825-bacf-fe2f0a9221d6-config-data\") pod \"barbican-worker-5b68b655dc-96rhh\" (UID: \"a75dd8a4-3980-4825-bacf-fe2f0a9221d6\") " pod="openstack/barbican-worker-5b68b655dc-96rhh" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.482670 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a75dd8a4-3980-4825-bacf-fe2f0a9221d6-combined-ca-bundle\") pod \"barbican-worker-5b68b655dc-96rhh\" (UID: \"a75dd8a4-3980-4825-bacf-fe2f0a9221d6\") " pod="openstack/barbican-worker-5b68b655dc-96rhh" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.492480 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j29q4\" (UniqueName: \"kubernetes.io/projected/a75dd8a4-3980-4825-bacf-fe2f0a9221d6-kube-api-access-j29q4\") pod \"barbican-worker-5b68b655dc-96rhh\" (UID: \"a75dd8a4-3980-4825-bacf-fe2f0a9221d6\") " pod="openstack/barbican-worker-5b68b655dc-96rhh" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.498277 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a75dd8a4-3980-4825-bacf-fe2f0a9221d6-config-data-custom\") pod \"barbican-worker-5b68b655dc-96rhh\" (UID: \"a75dd8a4-3980-4825-bacf-fe2f0a9221d6\") " pod="openstack/barbican-worker-5b68b655dc-96rhh" Feb 27 
10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.499167 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-zcq55"] Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.522984 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ff4498744-6lwbb" event={"ID":"a4671725-3a26-4c77-ad25-01c9aa82bdf0","Type":"ContainerStarted","Data":"78bc388bf6e3846dac3e7a0db319337129eb7aec5bfae3d1e655a02a30250826"} Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.531849 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-74fdd766cd-6wpqm"] Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.547310 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.549643 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.549903 4728 generic.go:334] "Generic (PLEG): container finished" podID="fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7" containerID="4815408e41f007cf27f75fa6d1de8f0837382754e52ee954edfc5a25e9b7d664" exitCode=0 Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.551198 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" event={"ID":"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7","Type":"ContainerDied","Data":"4815408e41f007cf27f75fa6d1de8f0837382754e52ee954edfc5a25e9b7d664"} Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.551230 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.551336 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.561959 4728 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74fdd766cd-6wpqm"] Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.572305 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-config\") pod \"dnsmasq-dns-688c87cc99-zcq55\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.572408 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-dns-svc\") pod \"dnsmasq-dns-688c87cc99-zcq55\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.572456 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-zcq55\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.572482 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-zcq55\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.572675 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42tm2\" (UniqueName: \"kubernetes.io/projected/73597ba1-35ae-48f3-a74f-14816ae7bc61-kube-api-access-42tm2\") pod 
\"dnsmasq-dns-688c87cc99-zcq55\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.572762 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-zcq55\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.674897 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqvdh\" (UniqueName: \"kubernetes.io/projected/7bd979a4-451e-4f4d-affa-43bfbc671238-kube-api-access-tqvdh\") pod \"barbican-api-74fdd766cd-6wpqm\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.675233 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-zcq55\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.675270 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-zcq55\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.675309 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7bd979a4-451e-4f4d-affa-43bfbc671238-combined-ca-bundle\") pod \"barbican-api-74fdd766cd-6wpqm\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.675447 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42tm2\" (UniqueName: \"kubernetes.io/projected/73597ba1-35ae-48f3-a74f-14816ae7bc61-kube-api-access-42tm2\") pod \"dnsmasq-dns-688c87cc99-zcq55\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.675494 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7bd979a4-451e-4f4d-affa-43bfbc671238-config-data-custom\") pod \"barbican-api-74fdd766cd-6wpqm\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.675538 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-zcq55\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.676947 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-zcq55\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.677276 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-zcq55\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.677288 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-zcq55\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.677386 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd979a4-451e-4f4d-affa-43bfbc671238-config-data\") pod \"barbican-api-74fdd766cd-6wpqm\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.677430 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-config\") pod \"dnsmasq-dns-688c87cc99-zcq55\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.677520 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd979a4-451e-4f4d-affa-43bfbc671238-logs\") pod \"barbican-api-74fdd766cd-6wpqm\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.677589 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-dns-svc\") pod \"dnsmasq-dns-688c87cc99-zcq55\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.678184 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-dns-svc\") pod \"dnsmasq-dns-688c87cc99-zcq55\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.678795 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-config\") pod \"dnsmasq-dns-688c87cc99-zcq55\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.700974 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42tm2\" (UniqueName: \"kubernetes.io/projected/73597ba1-35ae-48f3-a74f-14816ae7bc61-kube-api-access-42tm2\") pod \"dnsmasq-dns-688c87cc99-zcq55\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.762366 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5b68b655dc-96rhh" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.780010 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7bd979a4-451e-4f4d-affa-43bfbc671238-config-data-custom\") pod \"barbican-api-74fdd766cd-6wpqm\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.780083 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd979a4-451e-4f4d-affa-43bfbc671238-config-data\") pod \"barbican-api-74fdd766cd-6wpqm\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.780125 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd979a4-451e-4f4d-affa-43bfbc671238-logs\") pod \"barbican-api-74fdd766cd-6wpqm\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.780177 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqvdh\" (UniqueName: \"kubernetes.io/projected/7bd979a4-451e-4f4d-affa-43bfbc671238-kube-api-access-tqvdh\") pod \"barbican-api-74fdd766cd-6wpqm\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.780264 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd979a4-451e-4f4d-affa-43bfbc671238-combined-ca-bundle\") pod \"barbican-api-74fdd766cd-6wpqm\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " 
pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.781451 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd979a4-451e-4f4d-affa-43bfbc671238-logs\") pod \"barbican-api-74fdd766cd-6wpqm\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.787588 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7bd979a4-451e-4f4d-affa-43bfbc671238-config-data-custom\") pod \"barbican-api-74fdd766cd-6wpqm\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.792466 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.797726 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd979a4-451e-4f4d-affa-43bfbc671238-config-data\") pod \"barbican-api-74fdd766cd-6wpqm\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.812575 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd979a4-451e-4f4d-affa-43bfbc671238-combined-ca-bundle\") pod \"barbican-api-74fdd766cd-6wpqm\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.818361 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqvdh\" (UniqueName: 
\"kubernetes.io/projected/7bd979a4-451e-4f4d-affa-43bfbc671238-kube-api-access-tqvdh\") pod \"barbican-api-74fdd766cd-6wpqm\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.848718 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" podUID="fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.200:5353: connect: connection refused" Feb 27 10:49:17 crc kubenswrapper[4728]: I0227 10:49:17.879209 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.450920 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.512288 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-config\") pod \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.512647 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-dns-swift-storage-0\") pod \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.512675 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-798c7\" (UniqueName: \"kubernetes.io/projected/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-kube-api-access-798c7\") pod \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " Feb 27 
10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.512740 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-ovsdbserver-nb\") pod \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.512908 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-ovsdbserver-sb\") pod \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.512959 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-dns-svc\") pod \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\" (UID: \"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7\") " Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.604998 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dgjpm" event={"ID":"e977ffad-2764-4871-bdc8-24f0c3b4caf1","Type":"ContainerStarted","Data":"dd812a37993c075fb2911e1dcdf493cf564cd9a15900330e4de3fd18c118594f"} Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.652407 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-dgjpm" podStartSLOduration=4.635847505 podStartE2EDuration="52.652385937s" podCreationTimestamp="2026-02-27 10:48:26 +0000 UTC" firstStartedPulling="2026-02-27 10:48:28.51961551 +0000 UTC m=+1328.481981616" lastFinishedPulling="2026-02-27 10:49:16.536153942 +0000 UTC m=+1376.498520048" observedRunningTime="2026-02-27 10:49:18.648049819 +0000 UTC m=+1378.610415915" watchObservedRunningTime="2026-02-27 10:49:18.652385937 +0000 UTC m=+1378.614752043" Feb 27 10:49:18 crc 
kubenswrapper[4728]: I0227 10:49:18.655068 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" event={"ID":"fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7","Type":"ContainerDied","Data":"3749a9512ab3ef2baa36b6560a404280eb44ee0dbd154852727285e2362ab66c"} Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.655121 4728 scope.go:117] "RemoveContainer" containerID="4815408e41f007cf27f75fa6d1de8f0837382754e52ee954edfc5a25e9b7d664" Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.655246 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-2ch9l" Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.700703 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ff4498744-6lwbb" event={"ID":"a4671725-3a26-4c77-ad25-01c9aa82bdf0","Type":"ContainerStarted","Data":"5e64b5e5dee261a73a652b58276e7104d40ee91eb4d60bb79282c5a31b47261a"} Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.734850 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-kube-api-access-798c7" (OuterVolumeSpecName: "kube-api-access-798c7") pod "fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7" (UID: "fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7"). InnerVolumeSpecName "kube-api-access-798c7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.832653 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-798c7\" (UniqueName: \"kubernetes.io/projected/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-kube-api-access-798c7\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.838479 4728 scope.go:117] "RemoveContainer" containerID="a83c4c7783488bed6d7e79ac3c7209d140d8857acc63750129049031c01e89ae" Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.842754 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-config" (OuterVolumeSpecName: "config") pod "fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7" (UID: "fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.897029 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7" (UID: "fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.921877 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5cd4cd64f5-rhzmp"] Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.942281 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.942361 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.945715 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7" (UID: "fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.957185 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7" (UID: "fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:18 crc kubenswrapper[4728]: I0227 10:49:18.999007 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7" (UID: "fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.013757 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6cb4766cb5-wf4ln"] Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.027388 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7575ff7b96-fznp4"] Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.042585 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7676cbc4f4-f7krv"] Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.045827 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.045854 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.045864 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.329897 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-2ch9l"] Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.353236 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-2ch9l"] Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.440927 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-zcq55"] Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.474575 4728 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b68b655dc-96rhh"] Feb 27 10:49:19 crc kubenswrapper[4728]: W0227 10:49:19.503549 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bd979a4_451e_4f4d_affa_43bfbc671238.slice/crio-ae7b7ab0d08354aad20f1827991f7a87b4b631681daec95bc589721c22fb9583 WatchSource:0}: Error finding container ae7b7ab0d08354aad20f1827991f7a87b4b631681daec95bc589721c22fb9583: Status 404 returned error can't find the container with id ae7b7ab0d08354aad20f1827991f7a87b4b631681daec95bc589721c22fb9583 Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.517323 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74fdd766cd-6wpqm"] Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.761986 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b68b655dc-96rhh" event={"ID":"a75dd8a4-3980-4825-bacf-fe2f0a9221d6","Type":"ContainerStarted","Data":"1c368cce429e1edd83b6bb2ac6edbff5b8234d219c2bf189991eefe56df5b333"} Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.764710 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-zcq55" event={"ID":"73597ba1-35ae-48f3-a74f-14816ae7bc61","Type":"ContainerStarted","Data":"470f2fb85fa7cb46e33729307978527b22312bd40f283b3ee64bd023043cd364"} Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.767997 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" event={"ID":"e093b475-30b1-478c-b832-8b7b61a5f8f5","Type":"ContainerStarted","Data":"510c7581fff6f5ca33136c1bb8954811b3466e9693d8e42b242b401e7b5675da"} Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.772448 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ff4498744-6lwbb" 
event={"ID":"a4671725-3a26-4c77-ad25-01c9aa82bdf0","Type":"ContainerStarted","Data":"d004a9842d2f8dd22564f858096272a1c8aa802c0fa92e92f4e304530b77a7f1"} Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.773996 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.774046 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.792263 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cb4766cb5-wf4ln" event={"ID":"055c18b7-6800-4f96-b544-7fc72a1eb468","Type":"ContainerStarted","Data":"cc8a217b9b5207997f26ecf4d04422c0d34afb2d6ec2f726a746941e5bcdcc4c"} Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.792329 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cb4766cb5-wf4ln" event={"ID":"055c18b7-6800-4f96-b544-7fc72a1eb468","Type":"ContainerStarted","Data":"9208272c19057d574e60035fae57ea1ae88233920921cfa90cabe899822a2e15"} Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.792460 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.803320 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7ff4498744-6lwbb" podStartSLOduration=4.803294537 podStartE2EDuration="4.803294537s" podCreationTimestamp="2026-02-27 10:49:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:49:19.798951479 +0000 UTC m=+1379.761317585" watchObservedRunningTime="2026-02-27 10:49:19.803294537 +0000 UTC m=+1379.765660643" Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.818832 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-sync-dds8x" event={"ID":"e293ec90-6006-49f0-8e24-8a0f4327d2cf","Type":"ContainerStarted","Data":"7ae680eac9014a0a7a7b44a257a042e0ad152a6c92810dbe55308637990af761"} Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.834580 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6cb4766cb5-wf4ln" podStartSLOduration=3.8345520950000003 podStartE2EDuration="3.834552095s" podCreationTimestamp="2026-02-27 10:49:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:49:19.822871178 +0000 UTC m=+1379.785237284" watchObservedRunningTime="2026-02-27 10:49:19.834552095 +0000 UTC m=+1379.796918211" Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.839582 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74fdd766cd-6wpqm" event={"ID":"7bd979a4-451e-4f4d-affa-43bfbc671238","Type":"ContainerStarted","Data":"ae7b7ab0d08354aad20f1827991f7a87b4b631681daec95bc589721c22fb9583"} Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.841634 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" event={"ID":"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d","Type":"ContainerStarted","Data":"d5c7dd4b086fcf9b62aeb4a215bc19049d7160c3603741af03e0130ef9d80fb4"} Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.857555 4728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.857671 4728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.858389 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" 
event={"ID":"fb615ea1-cb62-47ba-886e-eb2f761fea63","Type":"ContainerStarted","Data":"5dd7693e78cc9bc31dc82ba6e1c790106fc15e575e580f30aa74a2ed2af38ea5"} Feb 27 10:49:19 crc kubenswrapper[4728]: I0227 10:49:19.868042 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-dds8x" podStartSLOduration=5.578891734 podStartE2EDuration="53.868019943s" podCreationTimestamp="2026-02-27 10:48:26 +0000 UTC" firstStartedPulling="2026-02-27 10:48:28.517845412 +0000 UTC m=+1328.480211518" lastFinishedPulling="2026-02-27 10:49:16.806973621 +0000 UTC m=+1376.769339727" observedRunningTime="2026-02-27 10:49:19.846740896 +0000 UTC m=+1379.809107002" watchObservedRunningTime="2026-02-27 10:49:19.868019943 +0000 UTC m=+1379.830386049" Feb 27 10:49:20 crc kubenswrapper[4728]: I0227 10:49:20.762608 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7" path="/var/lib/kubelet/pods/fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7/volumes" Feb 27 10:49:20 crc kubenswrapper[4728]: I0227 10:49:20.763925 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 27 10:49:20 crc kubenswrapper[4728]: I0227 10:49:20.911557 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:20 crc kubenswrapper[4728]: I0227 10:49:20.911584 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:20 crc kubenswrapper[4728]: I0227 10:49:20.911594 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74fdd766cd-6wpqm" event={"ID":"7bd979a4-451e-4f4d-affa-43bfbc671238","Type":"ContainerStarted","Data":"2f79f0566f330c7ee79dab182686219d894cf8bac85a4838943573d299e090c5"} Feb 27 10:49:20 crc kubenswrapper[4728]: I0227 10:49:20.911610 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-74fdd766cd-6wpqm" event={"ID":"7bd979a4-451e-4f4d-affa-43bfbc671238","Type":"ContainerStarted","Data":"1303e6b84b0023b055b9dd2297ed0c2fb3664f88151c8abddf87df8ef82c3a32"} Feb 27 10:49:20 crc kubenswrapper[4728]: I0227 10:49:20.936893 4728 generic.go:334] "Generic (PLEG): container finished" podID="73597ba1-35ae-48f3-a74f-14816ae7bc61" containerID="62449a8dde36f4272117abc5f949454f04141e898120a07fb80e1e23ef083181" exitCode=0 Feb 27 10:49:20 crc kubenswrapper[4728]: I0227 10:49:20.936961 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-zcq55" event={"ID":"73597ba1-35ae-48f3-a74f-14816ae7bc61","Type":"ContainerDied","Data":"62449a8dde36f4272117abc5f949454f04141e898120a07fb80e1e23ef083181"} Feb 27 10:49:20 crc kubenswrapper[4728]: I0227 10:49:20.944410 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-74fdd766cd-6wpqm" podStartSLOduration=3.944395081 podStartE2EDuration="3.944395081s" podCreationTimestamp="2026-02-27 10:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:49:20.942107478 +0000 UTC m=+1380.904473584" watchObservedRunningTime="2026-02-27 10:49:20.944395081 +0000 UTC m=+1380.906761187" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.178576 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-57f4b5948b-j7k68"] Feb 27 10:49:21 crc kubenswrapper[4728]: E0227 10:49:21.179681 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7" containerName="init" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.179701 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7" containerName="init" Feb 27 10:49:21 crc kubenswrapper[4728]: E0227 10:49:21.179761 4728 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7" containerName="dnsmasq-dns" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.179769 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7" containerName="dnsmasq-dns" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.180206 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="fafc2d4e-4a84-43c4-8ffc-4d5b7b3fe4f7" containerName="dnsmasq-dns" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.182265 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.187442 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.187700 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.248129 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57f4b5948b-j7k68"] Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.256539 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29420e42-ebe7-4df2-8418-30b0fcb5c627-config-data\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.256723 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7xrg\" (UniqueName: \"kubernetes.io/projected/29420e42-ebe7-4df2-8418-30b0fcb5c627-kube-api-access-j7xrg\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 
10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.256796 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29420e42-ebe7-4df2-8418-30b0fcb5c627-config-data-custom\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.257160 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29420e42-ebe7-4df2-8418-30b0fcb5c627-public-tls-certs\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.257344 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29420e42-ebe7-4df2-8418-30b0fcb5c627-internal-tls-certs\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.257414 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29420e42-ebe7-4df2-8418-30b0fcb5c627-logs\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.257485 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29420e42-ebe7-4df2-8418-30b0fcb5c627-combined-ca-bundle\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " 
pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.361859 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29420e42-ebe7-4df2-8418-30b0fcb5c627-internal-tls-certs\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.361926 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29420e42-ebe7-4df2-8418-30b0fcb5c627-logs\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.361958 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29420e42-ebe7-4df2-8418-30b0fcb5c627-combined-ca-bundle\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.361978 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29420e42-ebe7-4df2-8418-30b0fcb5c627-config-data\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.362024 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7xrg\" (UniqueName: \"kubernetes.io/projected/29420e42-ebe7-4df2-8418-30b0fcb5c627-kube-api-access-j7xrg\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 
27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.362047 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29420e42-ebe7-4df2-8418-30b0fcb5c627-config-data-custom\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.362125 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29420e42-ebe7-4df2-8418-30b0fcb5c627-public-tls-certs\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.369225 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29420e42-ebe7-4df2-8418-30b0fcb5c627-logs\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.376126 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29420e42-ebe7-4df2-8418-30b0fcb5c627-config-data\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.378118 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29420e42-ebe7-4df2-8418-30b0fcb5c627-public-tls-certs\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.380133 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29420e42-ebe7-4df2-8418-30b0fcb5c627-internal-tls-certs\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.392128 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29420e42-ebe7-4df2-8418-30b0fcb5c627-config-data-custom\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.399036 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7xrg\" (UniqueName: \"kubernetes.io/projected/29420e42-ebe7-4df2-8418-30b0fcb5c627-kube-api-access-j7xrg\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.401199 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29420e42-ebe7-4df2-8418-30b0fcb5c627-combined-ca-bundle\") pod \"barbican-api-57f4b5948b-j7k68\" (UID: \"29420e42-ebe7-4df2-8418-30b0fcb5c627\") " pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.549881 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.802979 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.803088 4728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 10:49:21 crc kubenswrapper[4728]: I0227 10:49:21.872053 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 10:49:23 crc kubenswrapper[4728]: I0227 10:49:23.309365 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57f4b5948b-j7k68"] Feb 27 10:49:23 crc kubenswrapper[4728]: I0227 10:49:23.994743 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b68b655dc-96rhh" event={"ID":"a75dd8a4-3980-4825-bacf-fe2f0a9221d6","Type":"ContainerStarted","Data":"bf4ab0e8b53432f65f957b609d77dd7c1169f6ce39571b9b0a23f4f0890fe6c6"} Feb 27 10:49:23 crc kubenswrapper[4728]: I0227 10:49:23.995216 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b68b655dc-96rhh" event={"ID":"a75dd8a4-3980-4825-bacf-fe2f0a9221d6","Type":"ContainerStarted","Data":"7bf2749067ddf95bc8e0bf9535f1d6bfc2be1e4acee89e88a793ba5d9aeb8cdc"} Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.000543 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-zcq55" event={"ID":"73597ba1-35ae-48f3-a74f-14816ae7bc61","Type":"ContainerStarted","Data":"c74115dbc6c810db98fb170a18853192bf94ccc9e2afb719b3711d29f26b429f"} Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.001622 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.006874 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" event={"ID":"e093b475-30b1-478c-b832-8b7b61a5f8f5","Type":"ContainerStarted","Data":"9634b1a101d7f7dc90fb75bbbeef8348b988b59542efd1dc84c40d8f4ffce27c"} Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.006913 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" event={"ID":"e093b475-30b1-478c-b832-8b7b61a5f8f5","Type":"ContainerStarted","Data":"87e896e525743547b8f62945b14e0d60a7b281582f75c22e95cdc29ff158baf8"} Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.011919 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" event={"ID":"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d","Type":"ContainerStarted","Data":"fd3c25c8085189259f47ed83879733feb3cf7ee6663d0ca0b342616ce143e50b"} Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.011966 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" event={"ID":"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d","Type":"ContainerStarted","Data":"513a27ed751d69a0fb4eea523015ed166374d0b3248a2b9de7386aa26403baa5"} Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.028708 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" event={"ID":"fb615ea1-cb62-47ba-886e-eb2f761fea63","Type":"ContainerStarted","Data":"b0dfce92cd827cfd6c158da85cf236d418482e3001e4b965eaee5de2304a37b3"} Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.028745 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" event={"ID":"fb615ea1-cb62-47ba-886e-eb2f761fea63","Type":"ContainerStarted","Data":"f34013f5f83db04f395fcd50fdc27ef2269837af34f414b41e99d4675f6237ff"} Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.041913 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-worker-5b68b655dc-96rhh" podStartSLOduration=3.759600996 podStartE2EDuration="7.041899011s" podCreationTimestamp="2026-02-27 10:49:17 +0000 UTC" firstStartedPulling="2026-02-27 10:49:19.50418934 +0000 UTC m=+1379.466555446" lastFinishedPulling="2026-02-27 10:49:22.786487345 +0000 UTC m=+1382.748853461" observedRunningTime="2026-02-27 10:49:24.016863302 +0000 UTC m=+1383.979229408" watchObservedRunningTime="2026-02-27 10:49:24.041899011 +0000 UTC m=+1384.004265117" Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.043889 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57f4b5948b-j7k68" event={"ID":"29420e42-ebe7-4df2-8418-30b0fcb5c627","Type":"ContainerStarted","Data":"09f5a3b7ccf8196440403a86b8c0564f9d3401dde2e8264ccbb49832df428e15"} Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.043937 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57f4b5948b-j7k68" event={"ID":"29420e42-ebe7-4df2-8418-30b0fcb5c627","Type":"ContainerStarted","Data":"c9e8d26ad238d5b92fb4205f7f17e427050331d44298076df61c96b59bfbf193"} Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.043948 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57f4b5948b-j7k68" event={"ID":"29420e42-ebe7-4df2-8418-30b0fcb5c627","Type":"ContainerStarted","Data":"03ac74a4a5205260dd29f372c231fb1a5ea263ea3e64c50cdef404a37b78d96c"} Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.045024 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.045053 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.062536 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7676cbc4f4-f7krv" 
podStartSLOduration=3.381653991 podStartE2EDuration="7.062516731s" podCreationTimestamp="2026-02-27 10:49:17 +0000 UTC" firstStartedPulling="2026-02-27 10:49:19.105886003 +0000 UTC m=+1379.068252109" lastFinishedPulling="2026-02-27 10:49:22.786748743 +0000 UTC m=+1382.749114849" observedRunningTime="2026-02-27 10:49:24.038976922 +0000 UTC m=+1384.001343028" watchObservedRunningTime="2026-02-27 10:49:24.062516731 +0000 UTC m=+1384.024882827" Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.066603 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5cd4cd64f5-rhzmp"] Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.076062 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" podStartSLOduration=4.256745202 podStartE2EDuration="8.076042199s" podCreationTimestamp="2026-02-27 10:49:16 +0000 UTC" firstStartedPulling="2026-02-27 10:49:18.966083869 +0000 UTC m=+1378.928449975" lastFinishedPulling="2026-02-27 10:49:22.785380866 +0000 UTC m=+1382.747746972" observedRunningTime="2026-02-27 10:49:24.072087281 +0000 UTC m=+1384.034453387" watchObservedRunningTime="2026-02-27 10:49:24.076042199 +0000 UTC m=+1384.038408305" Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.077871 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7575ff7b96-fznp4"] Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.120180 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-zcq55" podStartSLOduration=7.120160885 podStartE2EDuration="7.120160885s" podCreationTimestamp="2026-02-27 10:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:49:24.108370966 +0000 UTC m=+1384.070737072" watchObservedRunningTime="2026-02-27 10:49:24.120160885 +0000 UTC m=+1384.082526991" Feb 27 
10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.173878 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-57f4b5948b-j7k68" podStartSLOduration=3.173857842 podStartE2EDuration="3.173857842s" podCreationTimestamp="2026-02-27 10:49:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:49:24.133637241 +0000 UTC m=+1384.096003347" watchObservedRunningTime="2026-02-27 10:49:24.173857842 +0000 UTC m=+1384.136223948" Feb 27 10:49:24 crc kubenswrapper[4728]: I0227 10:49:24.186099 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" podStartSLOduration=4.483578216 podStartE2EDuration="8.186084344s" podCreationTimestamp="2026-02-27 10:49:16 +0000 UTC" firstStartedPulling="2026-02-27 10:49:19.083847154 +0000 UTC m=+1379.046213260" lastFinishedPulling="2026-02-27 10:49:22.786353272 +0000 UTC m=+1382.748719388" observedRunningTime="2026-02-27 10:49:24.154923329 +0000 UTC m=+1384.117289435" watchObservedRunningTime="2026-02-27 10:49:24.186084344 +0000 UTC m=+1384.148450450" Feb 27 10:49:26 crc kubenswrapper[4728]: I0227 10:49:26.079891 4728 generic.go:334] "Generic (PLEG): container finished" podID="e977ffad-2764-4871-bdc8-24f0c3b4caf1" containerID="dd812a37993c075fb2911e1dcdf493cf564cd9a15900330e4de3fd18c118594f" exitCode=0 Feb 27 10:49:26 crc kubenswrapper[4728]: I0227 10:49:26.081333 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dgjpm" event={"ID":"e977ffad-2764-4871-bdc8-24f0c3b4caf1","Type":"ContainerDied","Data":"dd812a37993c075fb2911e1dcdf493cf564cd9a15900330e4de3fd18c118594f"} Feb 27 10:49:26 crc kubenswrapper[4728]: I0227 10:49:26.081915 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" podUID="56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d" 
containerName="barbican-worker-log" containerID="cri-o://513a27ed751d69a0fb4eea523015ed166374d0b3248a2b9de7386aa26403baa5" gracePeriod=30 Feb 27 10:49:26 crc kubenswrapper[4728]: I0227 10:49:26.081943 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" podUID="56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d" containerName="barbican-worker" containerID="cri-o://fd3c25c8085189259f47ed83879733feb3cf7ee6663d0ca0b342616ce143e50b" gracePeriod=30 Feb 27 10:49:26 crc kubenswrapper[4728]: I0227 10:49:26.082108 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" podUID="fb615ea1-cb62-47ba-886e-eb2f761fea63" containerName="barbican-keystone-listener-log" containerID="cri-o://f34013f5f83db04f395fcd50fdc27ef2269837af34f414b41e99d4675f6237ff" gracePeriod=30 Feb 27 10:49:26 crc kubenswrapper[4728]: I0227 10:49:26.082123 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" podUID="fb615ea1-cb62-47ba-886e-eb2f761fea63" containerName="barbican-keystone-listener" containerID="cri-o://b0dfce92cd827cfd6c158da85cf236d418482e3001e4b965eaee5de2304a37b3" gracePeriod=30 Feb 27 10:49:27 crc kubenswrapper[4728]: I0227 10:49:27.096813 4728 generic.go:334] "Generic (PLEG): container finished" podID="56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d" containerID="fd3c25c8085189259f47ed83879733feb3cf7ee6663d0ca0b342616ce143e50b" exitCode=0 Feb 27 10:49:27 crc kubenswrapper[4728]: I0227 10:49:27.097095 4728 generic.go:334] "Generic (PLEG): container finished" podID="56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d" containerID="513a27ed751d69a0fb4eea523015ed166374d0b3248a2b9de7386aa26403baa5" exitCode=143 Feb 27 10:49:27 crc kubenswrapper[4728]: I0227 10:49:27.097137 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" 
event={"ID":"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d","Type":"ContainerDied","Data":"fd3c25c8085189259f47ed83879733feb3cf7ee6663d0ca0b342616ce143e50b"} Feb 27 10:49:27 crc kubenswrapper[4728]: I0227 10:49:27.097208 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" event={"ID":"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d","Type":"ContainerDied","Data":"513a27ed751d69a0fb4eea523015ed166374d0b3248a2b9de7386aa26403baa5"} Feb 27 10:49:27 crc kubenswrapper[4728]: I0227 10:49:27.099430 4728 generic.go:334] "Generic (PLEG): container finished" podID="fb615ea1-cb62-47ba-886e-eb2f761fea63" containerID="b0dfce92cd827cfd6c158da85cf236d418482e3001e4b965eaee5de2304a37b3" exitCode=0 Feb 27 10:49:27 crc kubenswrapper[4728]: I0227 10:49:27.099454 4728 generic.go:334] "Generic (PLEG): container finished" podID="fb615ea1-cb62-47ba-886e-eb2f761fea63" containerID="f34013f5f83db04f395fcd50fdc27ef2269837af34f414b41e99d4675f6237ff" exitCode=143 Feb 27 10:49:27 crc kubenswrapper[4728]: I0227 10:49:27.099497 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" event={"ID":"fb615ea1-cb62-47ba-886e-eb2f761fea63","Type":"ContainerDied","Data":"b0dfce92cd827cfd6c158da85cf236d418482e3001e4b965eaee5de2304a37b3"} Feb 27 10:49:27 crc kubenswrapper[4728]: I0227 10:49:27.099539 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" event={"ID":"fb615ea1-cb62-47ba-886e-eb2f761fea63","Type":"ContainerDied","Data":"f34013f5f83db04f395fcd50fdc27ef2269837af34f414b41e99d4675f6237ff"} Feb 27 10:49:29 crc kubenswrapper[4728]: I0227 10:49:29.890619 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-dgjpm" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.081140 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8gs7\" (UniqueName: \"kubernetes.io/projected/e977ffad-2764-4871-bdc8-24f0c3b4caf1-kube-api-access-b8gs7\") pod \"e977ffad-2764-4871-bdc8-24f0c3b4caf1\" (UID: \"e977ffad-2764-4871-bdc8-24f0c3b4caf1\") " Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.081728 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e977ffad-2764-4871-bdc8-24f0c3b4caf1-combined-ca-bundle\") pod \"e977ffad-2764-4871-bdc8-24f0c3b4caf1\" (UID: \"e977ffad-2764-4871-bdc8-24f0c3b4caf1\") " Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.081785 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e977ffad-2764-4871-bdc8-24f0c3b4caf1-config-data\") pod \"e977ffad-2764-4871-bdc8-24f0c3b4caf1\" (UID: \"e977ffad-2764-4871-bdc8-24f0c3b4caf1\") " Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.087098 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.089734 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e977ffad-2764-4871-bdc8-24f0c3b4caf1-kube-api-access-b8gs7" (OuterVolumeSpecName: "kube-api-access-b8gs7") pod "e977ffad-2764-4871-bdc8-24f0c3b4caf1" (UID: "e977ffad-2764-4871-bdc8-24f0c3b4caf1"). InnerVolumeSpecName "kube-api-access-b8gs7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.141307 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-74fdd766cd-6wpqm" podUID="7bd979a4-451e-4f4d-affa-43bfbc671238" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.146816 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e977ffad-2764-4871-bdc8-24f0c3b4caf1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e977ffad-2764-4871-bdc8-24f0c3b4caf1" (UID: "e977ffad-2764-4871-bdc8-24f0c3b4caf1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.146922 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-74fdd766cd-6wpqm" podUID="7bd979a4-451e-4f4d-affa-43bfbc671238" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.157218 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dgjpm" event={"ID":"e977ffad-2764-4871-bdc8-24f0c3b4caf1","Type":"ContainerDied","Data":"556c1c4bd1fe865165f86e3f38b978424099c08ffa37ddbece050573cd110cbb"} Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.157272 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-dgjpm" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.157340 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="556c1c4bd1fe865165f86e3f38b978424099c08ffa37ddbece050573cd110cbb" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.166737 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74fdd766cd-6wpqm" podUID="7bd979a4-451e-4f4d-affa-43bfbc671238" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.186185 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8gs7\" (UniqueName: \"kubernetes.io/projected/e977ffad-2764-4871-bdc8-24f0c3b4caf1-kube-api-access-b8gs7\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.186224 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e977ffad-2764-4871-bdc8-24f0c3b4caf1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.261059 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e977ffad-2764-4871-bdc8-24f0c3b4caf1-config-data" (OuterVolumeSpecName: "config-data") pod "e977ffad-2764-4871-bdc8-24f0c3b4caf1" (UID: "e977ffad-2764-4871-bdc8-24f0c3b4caf1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.288544 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e977ffad-2764-4871-bdc8-24f0c3b4caf1-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.526753 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.533673 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.597588 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-config-data\") pod \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.597636 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-combined-ca-bundle\") pod \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.597719 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-config-data-custom\") pod \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.597820 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-logs\") pod \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\" (UID: \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.597871 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwrjj\" (UniqueName: \"kubernetes.io/projected/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-kube-api-access-mwrjj\") pod \"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\" (UID: 
\"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d\") " Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.597906 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vn6d\" (UniqueName: \"kubernetes.io/projected/fb615ea1-cb62-47ba-886e-eb2f761fea63-kube-api-access-7vn6d\") pod \"fb615ea1-cb62-47ba-886e-eb2f761fea63\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.597922 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb615ea1-cb62-47ba-886e-eb2f761fea63-config-data-custom\") pod \"fb615ea1-cb62-47ba-886e-eb2f761fea63\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.597947 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb615ea1-cb62-47ba-886e-eb2f761fea63-combined-ca-bundle\") pod \"fb615ea1-cb62-47ba-886e-eb2f761fea63\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.597979 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb615ea1-cb62-47ba-886e-eb2f761fea63-logs\") pod \"fb615ea1-cb62-47ba-886e-eb2f761fea63\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.598022 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb615ea1-cb62-47ba-886e-eb2f761fea63-config-data\") pod \"fb615ea1-cb62-47ba-886e-eb2f761fea63\" (UID: \"fb615ea1-cb62-47ba-886e-eb2f761fea63\") " Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.600072 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-logs" (OuterVolumeSpecName: "logs") pod "56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d" (UID: "56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.600072 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb615ea1-cb62-47ba-886e-eb2f761fea63-logs" (OuterVolumeSpecName: "logs") pod "fb615ea1-cb62-47ba-886e-eb2f761fea63" (UID: "fb615ea1-cb62-47ba-886e-eb2f761fea63"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.600833 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb615ea1-cb62-47ba-886e-eb2f761fea63-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.600857 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.601760 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d" (UID: "56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.602845 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb615ea1-cb62-47ba-886e-eb2f761fea63-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fb615ea1-cb62-47ba-886e-eb2f761fea63" (UID: "fb615ea1-cb62-47ba-886e-eb2f761fea63"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.603685 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb615ea1-cb62-47ba-886e-eb2f761fea63-kube-api-access-7vn6d" (OuterVolumeSpecName: "kube-api-access-7vn6d") pod "fb615ea1-cb62-47ba-886e-eb2f761fea63" (UID: "fb615ea1-cb62-47ba-886e-eb2f761fea63"). InnerVolumeSpecName "kube-api-access-7vn6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.604072 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-kube-api-access-mwrjj" (OuterVolumeSpecName: "kube-api-access-mwrjj") pod "56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d" (UID: "56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d"). InnerVolumeSpecName "kube-api-access-mwrjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.626882 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb615ea1-cb62-47ba-886e-eb2f761fea63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb615ea1-cb62-47ba-886e-eb2f761fea63" (UID: "fb615ea1-cb62-47ba-886e-eb2f761fea63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.653595 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d" (UID: "56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.658749 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb615ea1-cb62-47ba-886e-eb2f761fea63-config-data" (OuterVolumeSpecName: "config-data") pod "fb615ea1-cb62-47ba-886e-eb2f761fea63" (UID: "fb615ea1-cb62-47ba-886e-eb2f761fea63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.676810 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-config-data" (OuterVolumeSpecName: "config-data") pod "56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d" (UID: "56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.707876 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.708100 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.708205 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.708284 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwrjj\" (UniqueName: \"kubernetes.io/projected/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d-kube-api-access-mwrjj\") on node \"crc\" DevicePath \"\"" Feb 
27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.708370 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vn6d\" (UniqueName: \"kubernetes.io/projected/fb615ea1-cb62-47ba-886e-eb2f761fea63-kube-api-access-7vn6d\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.708447 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb615ea1-cb62-47ba-886e-eb2f761fea63-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.708556 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb615ea1-cb62-47ba-886e-eb2f761fea63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:30 crc kubenswrapper[4728]: I0227 10:49:30.708660 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb615ea1-cb62-47ba-886e-eb2f761fea63-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.168460 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7dd98b-314c-4e6c-a45b-31168398fca3","Type":"ContainerStarted","Data":"c7003714eb737ca78c1553e40d96527282cc15dcbdf01510efeccac703a65b43"} Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.169056 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerName="ceilometer-central-agent" containerID="cri-o://9f7b948c9f018079389be6b868387bb61b1aa295bd67bb6c8159167b8791e6d6" gracePeriod=30 Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.169310 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.169599 4728 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerName="proxy-httpd" containerID="cri-o://c7003714eb737ca78c1553e40d96527282cc15dcbdf01510efeccac703a65b43" gracePeriod=30 Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.169653 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerName="sg-core" containerID="cri-o://c8824d9275d35912d4d0e9e5c1b5cbaa6967a69a0ddaee99fe5f4d00d59e606a" gracePeriod=30 Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.169687 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerName="ceilometer-notification-agent" containerID="cri-o://4d48e0452ff5d1b0b1e2e160be3339fd25478438c80b2d61cdb3bdc1a989c207" gracePeriod=30 Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.177653 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.178469 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cd4cd64f5-rhzmp" event={"ID":"56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d","Type":"ContainerDied","Data":"d5c7dd4b086fcf9b62aeb4a215bc19049d7160c3603741af03e0130ef9d80fb4"} Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.178517 4728 scope.go:117] "RemoveContainer" containerID="fd3c25c8085189259f47ed83879733feb3cf7ee6663d0ca0b342616ce143e50b" Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.199479 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" event={"ID":"fb615ea1-cb62-47ba-886e-eb2f761fea63","Type":"ContainerDied","Data":"5dd7693e78cc9bc31dc82ba6e1c790106fc15e575e580f30aa74a2ed2af38ea5"} Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.199664 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7575ff7b96-fznp4" Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.212549 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.463605954 podStartE2EDuration="1m4.212531327s" podCreationTimestamp="2026-02-27 10:48:27 +0000 UTC" firstStartedPulling="2026-02-27 10:48:28.95448758 +0000 UTC m=+1328.916853686" lastFinishedPulling="2026-02-27 10:49:30.703412953 +0000 UTC m=+1390.665779059" observedRunningTime="2026-02-27 10:49:31.200934133 +0000 UTC m=+1391.163300259" watchObservedRunningTime="2026-02-27 10:49:31.212531327 +0000 UTC m=+1391.174897433" Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.260133 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5cd4cd64f5-rhzmp"] Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.278578 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5cd4cd64f5-rhzmp"] Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.298490 4728 scope.go:117] "RemoveContainer" containerID="513a27ed751d69a0fb4eea523015ed166374d0b3248a2b9de7386aa26403baa5" Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.313588 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7575ff7b96-fznp4"] Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.321648 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-7575ff7b96-fznp4"] Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.453727 4728 scope.go:117] "RemoveContainer" containerID="b0dfce92cd827cfd6c158da85cf236d418482e3001e4b965eaee5de2304a37b3" Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.473632 4728 scope.go:117] "RemoveContainer" containerID="f34013f5f83db04f395fcd50fdc27ef2269837af34f414b41e99d4675f6237ff" Feb 27 10:49:31 crc kubenswrapper[4728]: I0227 10:49:31.781776 4728 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:32 crc kubenswrapper[4728]: I0227 10:49:32.212477 4728 generic.go:334] "Generic (PLEG): container finished" podID="e293ec90-6006-49f0-8e24-8a0f4327d2cf" containerID="7ae680eac9014a0a7a7b44a257a042e0ad152a6c92810dbe55308637990af761" exitCode=0 Feb 27 10:49:32 crc kubenswrapper[4728]: I0227 10:49:32.212752 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dds8x" event={"ID":"e293ec90-6006-49f0-8e24-8a0f4327d2cf","Type":"ContainerDied","Data":"7ae680eac9014a0a7a7b44a257a042e0ad152a6c92810dbe55308637990af761"} Feb 27 10:49:32 crc kubenswrapper[4728]: I0227 10:49:32.215933 4728 generic.go:334] "Generic (PLEG): container finished" podID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerID="c7003714eb737ca78c1553e40d96527282cc15dcbdf01510efeccac703a65b43" exitCode=0 Feb 27 10:49:32 crc kubenswrapper[4728]: I0227 10:49:32.215968 4728 generic.go:334] "Generic (PLEG): container finished" podID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerID="c8824d9275d35912d4d0e9e5c1b5cbaa6967a69a0ddaee99fe5f4d00d59e606a" exitCode=2 Feb 27 10:49:32 crc kubenswrapper[4728]: I0227 10:49:32.215977 4728 generic.go:334] "Generic (PLEG): container finished" podID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerID="9f7b948c9f018079389be6b868387bb61b1aa295bd67bb6c8159167b8791e6d6" exitCode=0 Feb 27 10:49:32 crc kubenswrapper[4728]: I0227 10:49:32.216003 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7dd98b-314c-4e6c-a45b-31168398fca3","Type":"ContainerDied","Data":"c7003714eb737ca78c1553e40d96527282cc15dcbdf01510efeccac703a65b43"} Feb 27 10:49:32 crc kubenswrapper[4728]: I0227 10:49:32.216037 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7b7dd98b-314c-4e6c-a45b-31168398fca3","Type":"ContainerDied","Data":"c8824d9275d35912d4d0e9e5c1b5cbaa6967a69a0ddaee99fe5f4d00d59e606a"} Feb 27 10:49:32 crc kubenswrapper[4728]: I0227 10:49:32.216048 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7dd98b-314c-4e6c-a45b-31168398fca3","Type":"ContainerDied","Data":"9f7b948c9f018079389be6b868387bb61b1aa295bd67bb6c8159167b8791e6d6"} Feb 27 10:49:32 crc kubenswrapper[4728]: I0227 10:49:32.742946 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d" path="/var/lib/kubelet/pods/56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d/volumes" Feb 27 10:49:32 crc kubenswrapper[4728]: I0227 10:49:32.743607 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb615ea1-cb62-47ba-886e-eb2f761fea63" path="/var/lib/kubelet/pods/fb615ea1-cb62-47ba-886e-eb2f761fea63/volumes" Feb 27 10:49:32 crc kubenswrapper[4728]: I0227 10:49:32.794720 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:32 crc kubenswrapper[4728]: I0227 10:49:32.878448 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-947zv"] Feb 27 10:49:32 crc kubenswrapper[4728]: I0227 10:49:32.878848 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57c957c4ff-947zv" podUID="4028a7cf-4a8a-4901-96f2-ee5577db5592" containerName="dnsmasq-dns" containerID="cri-o://446191cd0a92bd3c333bb45fe9c5adb346168e0b6a2a0f1cf83b3c8a4b1ab86b" gracePeriod=10 Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.123521 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.183974 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-57f4b5948b-j7k68" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.292713 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74fdd766cd-6wpqm"] Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.292950 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74fdd766cd-6wpqm" podUID="7bd979a4-451e-4f4d-affa-43bfbc671238" containerName="barbican-api-log" containerID="cri-o://1303e6b84b0023b055b9dd2297ed0c2fb3664f88151c8abddf87df8ef82c3a32" gracePeriod=30 Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.293433 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74fdd766cd-6wpqm" podUID="7bd979a4-451e-4f4d-affa-43bfbc671238" containerName="barbican-api" containerID="cri-o://2f79f0566f330c7ee79dab182686219d894cf8bac85a4838943573d299e090c5" gracePeriod=30 Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.298581 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-74fdd766cd-6wpqm" podUID="7bd979a4-451e-4f4d-affa-43bfbc671238" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.210:9311/healthcheck\": EOF" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.298926 4728 generic.go:334] "Generic (PLEG): container finished" podID="4028a7cf-4a8a-4901-96f2-ee5577db5592" containerID="446191cd0a92bd3c333bb45fe9c5adb346168e0b6a2a0f1cf83b3c8a4b1ab86b" exitCode=0 Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.299106 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-947zv" event={"ID":"4028a7cf-4a8a-4901-96f2-ee5577db5592","Type":"ContainerDied","Data":"446191cd0a92bd3c333bb45fe9c5adb346168e0b6a2a0f1cf83b3c8a4b1ab86b"} Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.525426 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.657916 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-ovsdbserver-nb\") pod \"4028a7cf-4a8a-4901-96f2-ee5577db5592\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.658039 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-dns-svc\") pod \"4028a7cf-4a8a-4901-96f2-ee5577db5592\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.658210 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-config\") pod \"4028a7cf-4a8a-4901-96f2-ee5577db5592\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.658255 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-dns-swift-storage-0\") pod \"4028a7cf-4a8a-4901-96f2-ee5577db5592\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.658289 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-ovsdbserver-sb\") pod \"4028a7cf-4a8a-4901-96f2-ee5577db5592\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.658346 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpt8l\" 
(UniqueName: \"kubernetes.io/projected/4028a7cf-4a8a-4901-96f2-ee5577db5592-kube-api-access-lpt8l\") pod \"4028a7cf-4a8a-4901-96f2-ee5577db5592\" (UID: \"4028a7cf-4a8a-4901-96f2-ee5577db5592\") " Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.682656 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4028a7cf-4a8a-4901-96f2-ee5577db5592-kube-api-access-lpt8l" (OuterVolumeSpecName: "kube-api-access-lpt8l") pod "4028a7cf-4a8a-4901-96f2-ee5577db5592" (UID: "4028a7cf-4a8a-4901-96f2-ee5577db5592"). InnerVolumeSpecName "kube-api-access-lpt8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.741935 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-config" (OuterVolumeSpecName: "config") pod "4028a7cf-4a8a-4901-96f2-ee5577db5592" (UID: "4028a7cf-4a8a-4901-96f2-ee5577db5592"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.741968 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4028a7cf-4a8a-4901-96f2-ee5577db5592" (UID: "4028a7cf-4a8a-4901-96f2-ee5577db5592"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.746726 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4028a7cf-4a8a-4901-96f2-ee5577db5592" (UID: "4028a7cf-4a8a-4901-96f2-ee5577db5592"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.761389 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4028a7cf-4a8a-4901-96f2-ee5577db5592" (UID: "4028a7cf-4a8a-4901-96f2-ee5577db5592"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.761463 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.761845 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.761930 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpt8l\" (UniqueName: \"kubernetes.io/projected/4028a7cf-4a8a-4901-96f2-ee5577db5592-kube-api-access-lpt8l\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.762022 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.768550 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4028a7cf-4a8a-4901-96f2-ee5577db5592" (UID: "4028a7cf-4a8a-4901-96f2-ee5577db5592"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.824714 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dds8x" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.863954 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.863985 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4028a7cf-4a8a-4901-96f2-ee5577db5592-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.966223 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm7br\" (UniqueName: \"kubernetes.io/projected/e293ec90-6006-49f0-8e24-8a0f4327d2cf-kube-api-access-sm7br\") pod \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.966708 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e293ec90-6006-49f0-8e24-8a0f4327d2cf-etc-machine-id\") pod \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.966798 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-db-sync-config-data\") pod \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.967116 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-config-data\") pod \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.967177 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-scripts\") pod \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.967232 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-combined-ca-bundle\") pod \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\" (UID: \"e293ec90-6006-49f0-8e24-8a0f4327d2cf\") " Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.971445 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e293ec90-6006-49f0-8e24-8a0f4327d2cf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e293ec90-6006-49f0-8e24-8a0f4327d2cf" (UID: "e293ec90-6006-49f0-8e24-8a0f4327d2cf"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.975846 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-scripts" (OuterVolumeSpecName: "scripts") pod "e293ec90-6006-49f0-8e24-8a0f4327d2cf" (UID: "e293ec90-6006-49f0-8e24-8a0f4327d2cf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.976912 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e293ec90-6006-49f0-8e24-8a0f4327d2cf-kube-api-access-sm7br" (OuterVolumeSpecName: "kube-api-access-sm7br") pod "e293ec90-6006-49f0-8e24-8a0f4327d2cf" (UID: "e293ec90-6006-49f0-8e24-8a0f4327d2cf"). InnerVolumeSpecName "kube-api-access-sm7br". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:33 crc kubenswrapper[4728]: I0227 10:49:33.981615 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e293ec90-6006-49f0-8e24-8a0f4327d2cf" (UID: "e293ec90-6006-49f0-8e24-8a0f4327d2cf"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.059694 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e293ec90-6006-49f0-8e24-8a0f4327d2cf" (UID: "e293ec90-6006-49f0-8e24-8a0f4327d2cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.067652 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-config-data" (OuterVolumeSpecName: "config-data") pod "e293ec90-6006-49f0-8e24-8a0f4327d2cf" (UID: "e293ec90-6006-49f0-8e24-8a0f4327d2cf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.070083 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm7br\" (UniqueName: \"kubernetes.io/projected/e293ec90-6006-49f0-8e24-8a0f4327d2cf-kube-api-access-sm7br\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.070119 4728 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e293ec90-6006-49f0-8e24-8a0f4327d2cf-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.070131 4728 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.070140 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.070149 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.070159 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e293ec90-6006-49f0-8e24-8a0f4327d2cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.309119 4728 generic.go:334] "Generic (PLEG): container finished" podID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerID="4d48e0452ff5d1b0b1e2e160be3339fd25478438c80b2d61cdb3bdc1a989c207" exitCode=0 Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.309175 4728 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7dd98b-314c-4e6c-a45b-31168398fca3","Type":"ContainerDied","Data":"4d48e0452ff5d1b0b1e2e160be3339fd25478438c80b2d61cdb3bdc1a989c207"} Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.310402 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dds8x" event={"ID":"e293ec90-6006-49f0-8e24-8a0f4327d2cf","Type":"ContainerDied","Data":"b6f2a7cc5773b343b0059d34e89bad92ddd5d95a363ee4540a1b0a055dd6c6a6"} Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.310427 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6f2a7cc5773b343b0059d34e89bad92ddd5d95a363ee4540a1b0a055dd6c6a6" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.310474 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dds8x" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.319892 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-947zv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.319892 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-947zv" event={"ID":"4028a7cf-4a8a-4901-96f2-ee5577db5592","Type":"ContainerDied","Data":"b3e9ae206216f35b1724d7e0e152f9251e66797a7951f24736c877ec5b37adc8"} Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.320193 4728 scope.go:117] "RemoveContainer" containerID="446191cd0a92bd3c333bb45fe9c5adb346168e0b6a2a0f1cf83b3c8a4b1ab86b" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.322462 4728 generic.go:334] "Generic (PLEG): container finished" podID="7bd979a4-451e-4f4d-affa-43bfbc671238" containerID="1303e6b84b0023b055b9dd2297ed0c2fb3664f88151c8abddf87df8ef82c3a32" exitCode=143 Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.322799 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74fdd766cd-6wpqm" event={"ID":"7bd979a4-451e-4f4d-affa-43bfbc671238","Type":"ContainerDied","Data":"1303e6b84b0023b055b9dd2297ed0c2fb3664f88151c8abddf87df8ef82c3a32"} Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.341716 4728 scope.go:117] "RemoveContainer" containerID="a2fd235e74b6961cd1469db099014e0c5392a53cd2641a0220669630643886ac" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.356701 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-947zv"] Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.366952 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-947zv"] Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.556063 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 10:49:34 crc kubenswrapper[4728]: E0227 10:49:34.556764 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4028a7cf-4a8a-4901-96f2-ee5577db5592" containerName="init" Feb 27 10:49:34 
crc kubenswrapper[4728]: I0227 10:49:34.556781 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4028a7cf-4a8a-4901-96f2-ee5577db5592" containerName="init" Feb 27 10:49:34 crc kubenswrapper[4728]: E0227 10:49:34.556799 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4028a7cf-4a8a-4901-96f2-ee5577db5592" containerName="dnsmasq-dns" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.556805 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4028a7cf-4a8a-4901-96f2-ee5577db5592" containerName="dnsmasq-dns" Feb 27 10:49:34 crc kubenswrapper[4728]: E0227 10:49:34.556815 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e293ec90-6006-49f0-8e24-8a0f4327d2cf" containerName="cinder-db-sync" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.556821 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e293ec90-6006-49f0-8e24-8a0f4327d2cf" containerName="cinder-db-sync" Feb 27 10:49:34 crc kubenswrapper[4728]: E0227 10:49:34.556841 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e977ffad-2764-4871-bdc8-24f0c3b4caf1" containerName="heat-db-sync" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.556849 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e977ffad-2764-4871-bdc8-24f0c3b4caf1" containerName="heat-db-sync" Feb 27 10:49:34 crc kubenswrapper[4728]: E0227 10:49:34.556860 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d" containerName="barbican-worker" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.556866 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d" containerName="barbican-worker" Feb 27 10:49:34 crc kubenswrapper[4728]: E0227 10:49:34.556876 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb615ea1-cb62-47ba-886e-eb2f761fea63" containerName="barbican-keystone-listener" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 
10:49:34.556881 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb615ea1-cb62-47ba-886e-eb2f761fea63" containerName="barbican-keystone-listener" Feb 27 10:49:34 crc kubenswrapper[4728]: E0227 10:49:34.556902 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d" containerName="barbican-worker-log" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.556908 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d" containerName="barbican-worker-log" Feb 27 10:49:34 crc kubenswrapper[4728]: E0227 10:49:34.556921 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb615ea1-cb62-47ba-886e-eb2f761fea63" containerName="barbican-keystone-listener-log" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.556926 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb615ea1-cb62-47ba-886e-eb2f761fea63" containerName="barbican-keystone-listener-log" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.557108 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d" containerName="barbican-worker-log" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.557116 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="56cf3cf9-51d6-4ec0-ac63-0fb0e567c23d" containerName="barbican-worker" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.557136 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e293ec90-6006-49f0-8e24-8a0f4327d2cf" containerName="cinder-db-sync" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.557147 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb615ea1-cb62-47ba-886e-eb2f761fea63" containerName="barbican-keystone-listener" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.557157 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e977ffad-2764-4871-bdc8-24f0c3b4caf1" 
containerName="heat-db-sync" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.557169 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb615ea1-cb62-47ba-886e-eb2f761fea63" containerName="barbican-keystone-listener-log" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.557179 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4028a7cf-4a8a-4901-96f2-ee5577db5592" containerName="dnsmasq-dns" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.561118 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.565576 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.565701 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-lfz4z" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.565801 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.567966 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.575685 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.619997 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-67mnv"] Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.621815 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.649372 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-67mnv"] Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.684654 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.684716 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-67mnv\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.684740 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.684767 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.684787 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-67mnv\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.684828 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqflt\" (UniqueName: \"kubernetes.io/projected/6ac390a6-133b-434f-a853-cf7fc9d18de1-kube-api-access-qqflt\") pod \"dnsmasq-dns-6bb4fc677f-67mnv\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.684853 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-67mnv\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.684892 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx2hm\" (UniqueName: \"kubernetes.io/projected/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-kube-api-access-xx2hm\") pod \"cinder-scheduler-0\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.684912 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-config-data\") pod \"cinder-scheduler-0\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.684927 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-scripts\") pod \"cinder-scheduler-0\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.684967 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-67mnv\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.684996 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-config\") pod \"dnsmasq-dns-6bb4fc677f-67mnv\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.742889 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4028a7cf-4a8a-4901-96f2-ee5577db5592" path="/var/lib/kubelet/pods/4028a7cf-4a8a-4901-96f2-ee5577db5592/volumes" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.787287 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-67mnv\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.787341 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " 
pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.787374 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.787399 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-67mnv\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.787447 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqflt\" (UniqueName: \"kubernetes.io/projected/6ac390a6-133b-434f-a853-cf7fc9d18de1-kube-api-access-qqflt\") pod \"dnsmasq-dns-6bb4fc677f-67mnv\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.787483 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-67mnv\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.787548 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx2hm\" (UniqueName: \"kubernetes.io/projected/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-kube-api-access-xx2hm\") pod \"cinder-scheduler-0\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 
crc kubenswrapper[4728]: I0227 10:49:34.787569 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-config-data\") pod \"cinder-scheduler-0\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.787587 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-scripts\") pod \"cinder-scheduler-0\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.787648 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-67mnv\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.787683 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-config\") pod \"dnsmasq-dns-6bb4fc677f-67mnv\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.787785 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.794975 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-67mnv\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.795579 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-67mnv\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.795928 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.796552 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-67mnv\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.796601 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-67mnv\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.796895 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.796945 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.797132 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-config\") pod \"dnsmasq-dns-6bb4fc677f-67mnv\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.797591 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.801808 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.805991 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.808550 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-config-data\") pod \"cinder-scheduler-0\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.813853 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.820822 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx2hm\" (UniqueName: \"kubernetes.io/projected/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-kube-api-access-xx2hm\") pod \"cinder-scheduler-0\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.820820 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqflt\" (UniqueName: \"kubernetes.io/projected/6ac390a6-133b-434f-a853-cf7fc9d18de1-kube-api-access-qqflt\") pod \"dnsmasq-dns-6bb4fc677f-67mnv\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.842321 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.877613 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.889247 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.889312 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-logs\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.889384 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xftbk\" (UniqueName: \"kubernetes.io/projected/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-kube-api-access-xftbk\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.889403 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-config-data-custom\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.889432 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-scripts\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.889452 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-config-data\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.889494 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:34 crc kubenswrapper[4728]: I0227 10:49:34.952730 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.009468 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xftbk\" (UniqueName: \"kubernetes.io/projected/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-kube-api-access-xftbk\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.009804 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-config-data-custom\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.009861 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-scripts\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.009896 
4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-config-data\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.009990 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.010148 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.010227 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-logs\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.016396 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.027305 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-logs\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " 
pod="openstack/cinder-api-0" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.035450 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.037673 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-config-data\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.037992 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-scripts\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.042808 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-config-data-custom\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.058739 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xftbk\" (UniqueName: \"kubernetes.io/projected/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-kube-api-access-xftbk\") pod \"cinder-api-0\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " pod="openstack/cinder-api-0" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.222124 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.258020 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.349098 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b7dd98b-314c-4e6c-a45b-31168398fca3","Type":"ContainerDied","Data":"7ebafb18aae18f1edef65f0f1b611c9ecfcbf7297844106be1a19afb0c193edb"} Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.349157 4728 scope.go:117] "RemoveContainer" containerID="c7003714eb737ca78c1553e40d96527282cc15dcbdf01510efeccac703a65b43" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.349320 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.435281 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-combined-ca-bundle\") pod \"7b7dd98b-314c-4e6c-a45b-31168398fca3\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.435449 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2782x\" (UniqueName: \"kubernetes.io/projected/7b7dd98b-314c-4e6c-a45b-31168398fca3-kube-api-access-2782x\") pod \"7b7dd98b-314c-4e6c-a45b-31168398fca3\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.435558 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-config-data\") pod \"7b7dd98b-314c-4e6c-a45b-31168398fca3\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 
10:49:35.435603 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7dd98b-314c-4e6c-a45b-31168398fca3-log-httpd\") pod \"7b7dd98b-314c-4e6c-a45b-31168398fca3\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.435632 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7dd98b-314c-4e6c-a45b-31168398fca3-run-httpd\") pod \"7b7dd98b-314c-4e6c-a45b-31168398fca3\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.435781 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-scripts\") pod \"7b7dd98b-314c-4e6c-a45b-31168398fca3\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.435854 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-sg-core-conf-yaml\") pod \"7b7dd98b-314c-4e6c-a45b-31168398fca3\" (UID: \"7b7dd98b-314c-4e6c-a45b-31168398fca3\") " Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.441393 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b7dd98b-314c-4e6c-a45b-31168398fca3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7b7dd98b-314c-4e6c-a45b-31168398fca3" (UID: "7b7dd98b-314c-4e6c-a45b-31168398fca3"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.441713 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b7dd98b-314c-4e6c-a45b-31168398fca3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7b7dd98b-314c-4e6c-a45b-31168398fca3" (UID: "7b7dd98b-314c-4e6c-a45b-31168398fca3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.457198 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-scripts" (OuterVolumeSpecName: "scripts") pod "7b7dd98b-314c-4e6c-a45b-31168398fca3" (UID: "7b7dd98b-314c-4e6c-a45b-31168398fca3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.469697 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b7dd98b-314c-4e6c-a45b-31168398fca3-kube-api-access-2782x" (OuterVolumeSpecName: "kube-api-access-2782x") pod "7b7dd98b-314c-4e6c-a45b-31168398fca3" (UID: "7b7dd98b-314c-4e6c-a45b-31168398fca3"). InnerVolumeSpecName "kube-api-access-2782x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.480145 4728 scope.go:117] "RemoveContainer" containerID="c8824d9275d35912d4d0e9e5c1b5cbaa6967a69a0ddaee99fe5f4d00d59e606a" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.494617 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7b7dd98b-314c-4e6c-a45b-31168398fca3" (UID: "7b7dd98b-314c-4e6c-a45b-31168398fca3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.509992 4728 scope.go:117] "RemoveContainer" containerID="4d48e0452ff5d1b0b1e2e160be3339fd25478438c80b2d61cdb3bdc1a989c207" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.538125 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.538156 4728 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.538168 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2782x\" (UniqueName: \"kubernetes.io/projected/7b7dd98b-314c-4e6c-a45b-31168398fca3-kube-api-access-2782x\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.538176 4728 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7dd98b-314c-4e6c-a45b-31168398fca3-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.538185 4728 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b7dd98b-314c-4e6c-a45b-31168398fca3-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.542436 4728 scope.go:117] "RemoveContainer" containerID="9f7b948c9f018079389be6b868387bb61b1aa295bd67bb6c8159167b8791e6d6" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.555310 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-67mnv"] Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.568274 4728 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b7dd98b-314c-4e6c-a45b-31168398fca3" (UID: "7b7dd98b-314c-4e6c-a45b-31168398fca3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.641844 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.677599 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-config-data" (OuterVolumeSpecName: "config-data") pod "7b7dd98b-314c-4e6c-a45b-31168398fca3" (UID: "7b7dd98b-314c-4e6c-a45b-31168398fca3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.739605 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.744259 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b7dd98b-314c-4e6c-a45b-31168398fca3-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.953777 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 10:49:35 crc kubenswrapper[4728]: W0227 10:49:35.968789 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbaeadb7e_f0a8_40fc_8529_3ffc87d16c0f.slice/crio-0ef3b4554f54a30bd3cfbf5ef0e18a6a2f64e6cf83be8ccd07ea7545f806cc2a WatchSource:0}: Error finding container 
0ef3b4554f54a30bd3cfbf5ef0e18a6a2f64e6cf83be8ccd07ea7545f806cc2a: Status 404 returned error can't find the container with id 0ef3b4554f54a30bd3cfbf5ef0e18a6a2f64e6cf83be8ccd07ea7545f806cc2a Feb 27 10:49:35 crc kubenswrapper[4728]: I0227 10:49:35.986830 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.001206 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.026442 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:49:36 crc kubenswrapper[4728]: E0227 10:49:36.026887 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerName="sg-core" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.026900 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerName="sg-core" Feb 27 10:49:36 crc kubenswrapper[4728]: E0227 10:49:36.026915 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerName="ceilometer-central-agent" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.026921 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerName="ceilometer-central-agent" Feb 27 10:49:36 crc kubenswrapper[4728]: E0227 10:49:36.026941 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerName="ceilometer-notification-agent" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.026947 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerName="ceilometer-notification-agent" Feb 27 10:49:36 crc kubenswrapper[4728]: E0227 10:49:36.026960 4728 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerName="proxy-httpd" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.026965 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerName="proxy-httpd" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.027189 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerName="proxy-httpd" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.027207 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerName="ceilometer-notification-agent" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.027219 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerName="sg-core" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.027232 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7dd98b-314c-4e6c-a45b-31168398fca3" containerName="ceilometer-central-agent" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.029345 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.033557 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.033802 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.043792 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.152372 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-config-data\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.152445 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-log-httpd\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.152469 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.152559 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-run-httpd\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " 
pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.152588 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-scripts\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.152625 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.152685 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwxrq\" (UniqueName: \"kubernetes.io/projected/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-kube-api-access-bwxrq\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.254278 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-config-data\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.254350 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-log-httpd\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.254377 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.254454 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-run-httpd\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.254520 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-scripts\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.254563 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.254616 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwxrq\" (UniqueName: \"kubernetes.io/projected/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-kube-api-access-bwxrq\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.256595 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-run-httpd\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc 
kubenswrapper[4728]: I0227 10:49:36.256763 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-log-httpd\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.261186 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.267328 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-scripts\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.267584 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.271128 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-config-data\") pod \"ceilometer-0\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.271811 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwxrq\" (UniqueName: \"kubernetes.io/projected/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-kube-api-access-bwxrq\") pod \"ceilometer-0\" (UID: 
\"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.374225 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f","Type":"ContainerStarted","Data":"0ef3b4554f54a30bd3cfbf5ef0e18a6a2f64e6cf83be8ccd07ea7545f806cc2a"} Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.379580 4728 generic.go:334] "Generic (PLEG): container finished" podID="6ac390a6-133b-434f-a853-cf7fc9d18de1" containerID="b4704af0a9c43403ef4f3d24fefd50ee5b346c2d963535d161ca1e5b68777f3d" exitCode=0 Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.379706 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" event={"ID":"6ac390a6-133b-434f-a853-cf7fc9d18de1","Type":"ContainerDied","Data":"b4704af0a9c43403ef4f3d24fefd50ee5b346c2d963535d161ca1e5b68777f3d"} Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.379949 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" event={"ID":"6ac390a6-133b-434f-a853-cf7fc9d18de1","Type":"ContainerStarted","Data":"7baceb6ff4a47a0f088d9f22e4ccbb8aaa682c02f7db99253e549cb3f392b047"} Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.395035 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21","Type":"ContainerStarted","Data":"3381c8a23161c81f0ef34a3308b77600f122fa003409a71835527c67902d6984"} Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.451637 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.780641 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b7dd98b-314c-4e6c-a45b-31168398fca3" path="/var/lib/kubelet/pods/7b7dd98b-314c-4e6c-a45b-31168398fca3/volumes" Feb 27 10:49:36 crc kubenswrapper[4728]: I0227 10:49:36.787054 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 27 10:49:37 crc kubenswrapper[4728]: I0227 10:49:37.102685 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:49:37 crc kubenswrapper[4728]: I0227 10:49:37.410253 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcc4fffc-ff97-47b0-885c-f8d81d189bcf","Type":"ContainerStarted","Data":"c1eeeab23961a4293f63ea7140efd4514d22c73e9c262153628c35c63175bdd8"} Feb 27 10:49:37 crc kubenswrapper[4728]: I0227 10:49:37.412550 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f","Type":"ContainerStarted","Data":"fbd2753768f6a0b4c2107ffab1205e2bbf17877a2376979a78eb4774a63e79f9"} Feb 27 10:49:37 crc kubenswrapper[4728]: I0227 10:49:37.415366 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" event={"ID":"6ac390a6-133b-434f-a853-cf7fc9d18de1","Type":"ContainerStarted","Data":"144dcde888cd69d067767c13a3cfa8aef249f02c2010b5aa3981ea8193e9f6a7"} Feb 27 10:49:37 crc kubenswrapper[4728]: I0227 10:49:37.415661 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:37 crc kubenswrapper[4728]: I0227 10:49:37.441992 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" podStartSLOduration=3.441971275 podStartE2EDuration="3.441971275s" podCreationTimestamp="2026-02-27 10:49:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:49:37.432360864 +0000 UTC m=+1397.394726970" watchObservedRunningTime="2026-02-27 10:49:37.441971275 +0000 UTC m=+1397.404337381" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.019971 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.305648 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54dbd7489c-x96kn"] Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.306249 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54dbd7489c-x96kn" podUID="ac391aa0-3053-4675-a2c2-8c418ed9bd3a" containerName="neutron-api" containerID="cri-o://a109578385f7c7f0783ccb545d370fef90c64a9f399e14183da769b7a760367f" gracePeriod=30 Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.307078 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54dbd7489c-x96kn" podUID="ac391aa0-3053-4675-a2c2-8c418ed9bd3a" containerName="neutron-httpd" containerID="cri-o://cc4e75e51c2221b31239e3ccc5cb02ff809b659244ceda42d4952a1c7d985487" gracePeriod=30 Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.317789 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-54dbd7489c-x96kn" podUID="ac391aa0-3053-4675-a2c2-8c418ed9bd3a" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.202:9696/\": read tcp 10.217.0.2:56604->10.217.0.202:9696: read: connection reset by peer" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.334038 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8589b5d689-mwmds"] Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.351230 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.360154 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8589b5d689-mwmds"] Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.473875 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b60bed6-0eb1-40f9-a560-5488d7b2a551-ovndb-tls-certs\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.473959 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b60bed6-0eb1-40f9-a560-5488d7b2a551-internal-tls-certs\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.474007 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b60bed6-0eb1-40f9-a560-5488d7b2a551-httpd-config\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.474023 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b60bed6-0eb1-40f9-a560-5488d7b2a551-combined-ca-bundle\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.474089 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kjj8q\" (UniqueName: \"kubernetes.io/projected/0b60bed6-0eb1-40f9-a560-5488d7b2a551-kube-api-access-kjj8q\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.474150 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b60bed6-0eb1-40f9-a560-5488d7b2a551-public-tls-certs\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.474210 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b60bed6-0eb1-40f9-a560-5488d7b2a551-config\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.520092 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21","Type":"ContainerStarted","Data":"3e138468c9c2d0600f6be684fa1b62744cccd4456942382b240f652b5fe448b1"} Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.583063 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b60bed6-0eb1-40f9-a560-5488d7b2a551-ovndb-tls-certs\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.583361 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b60bed6-0eb1-40f9-a560-5488d7b2a551-internal-tls-certs\") pod 
\"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.583390 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b60bed6-0eb1-40f9-a560-5488d7b2a551-httpd-config\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.583404 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b60bed6-0eb1-40f9-a560-5488d7b2a551-combined-ca-bundle\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.583456 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjj8q\" (UniqueName: \"kubernetes.io/projected/0b60bed6-0eb1-40f9-a560-5488d7b2a551-kube-api-access-kjj8q\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.583484 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b60bed6-0eb1-40f9-a560-5488d7b2a551-public-tls-certs\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.583544 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b60bed6-0eb1-40f9-a560-5488d7b2a551-config\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " 
pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.590864 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f","Type":"ContainerStarted","Data":"3b397fe0d7c420a104894c1c16becf66b45f3b4c2aee942a00b5d6a3e69caf8a"} Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.591079 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="baeadb7e-f0a8-40fc-8529-3ffc87d16c0f" containerName="cinder-api-log" containerID="cri-o://fbd2753768f6a0b4c2107ffab1205e2bbf17877a2376979a78eb4774a63e79f9" gracePeriod=30 Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.591387 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.591657 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="baeadb7e-f0a8-40fc-8529-3ffc87d16c0f" containerName="cinder-api" containerID="cri-o://3b397fe0d7c420a104894c1c16becf66b45f3b4c2aee942a00b5d6a3e69caf8a" gracePeriod=30 Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.599713 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b60bed6-0eb1-40f9-a560-5488d7b2a551-combined-ca-bundle\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.600218 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b60bed6-0eb1-40f9-a560-5488d7b2a551-ovndb-tls-certs\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 
10:49:38.607012 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b60bed6-0eb1-40f9-a560-5488d7b2a551-internal-tls-certs\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.612863 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b60bed6-0eb1-40f9-a560-5488d7b2a551-httpd-config\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.613280 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b60bed6-0eb1-40f9-a560-5488d7b2a551-config\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.617435 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcc4fffc-ff97-47b0-885c-f8d81d189bcf","Type":"ContainerStarted","Data":"8ebba70ccc18b948278ad3d65f5711ae8ff260a5edfefb06ea4ce731f7cb09be"} Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.617461 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b60bed6-0eb1-40f9-a560-5488d7b2a551-public-tls-certs\") pod \"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.617123 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjj8q\" (UniqueName: \"kubernetes.io/projected/0b60bed6-0eb1-40f9-a560-5488d7b2a551-kube-api-access-kjj8q\") pod 
\"neutron-8589b5d689-mwmds\" (UID: \"0b60bed6-0eb1-40f9-a560-5488d7b2a551\") " pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.636192 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.63617568 podStartE2EDuration="4.63617568s" podCreationTimestamp="2026-02-27 10:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:49:38.632534871 +0000 UTC m=+1398.594900977" watchObservedRunningTime="2026-02-27 10:49:38.63617568 +0000 UTC m=+1398.598541786" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.730569 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74fdd766cd-6wpqm" podUID="7bd979a4-451e-4f4d-affa-43bfbc671238" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.210:9311/healthcheck\": read tcp 10.217.0.2:60678->10.217.0.210:9311: read: connection reset by peer" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.730590 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74fdd766cd-6wpqm" podUID="7bd979a4-451e-4f4d-affa-43bfbc671238" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.210:9311/healthcheck\": read tcp 10.217.0.2:60670->10.217.0.210:9311: read: connection reset by peer" Feb 27 10:49:38 crc kubenswrapper[4728]: I0227 10:49:38.863398 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.523543 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.568787 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.632957 4728 generic.go:334] "Generic (PLEG): container finished" podID="7bd979a4-451e-4f4d-affa-43bfbc671238" containerID="2f79f0566f330c7ee79dab182686219d894cf8bac85a4838943573d299e090c5" exitCode=0 Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.633295 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74fdd766cd-6wpqm" event={"ID":"7bd979a4-451e-4f4d-affa-43bfbc671238","Type":"ContainerDied","Data":"2f79f0566f330c7ee79dab182686219d894cf8bac85a4838943573d299e090c5"} Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.633369 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74fdd766cd-6wpqm" event={"ID":"7bd979a4-451e-4f4d-affa-43bfbc671238","Type":"ContainerDied","Data":"ae7b7ab0d08354aad20f1827991f7a87b4b631681daec95bc589721c22fb9583"} Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.633386 4728 scope.go:117] "RemoveContainer" containerID="2f79f0566f330c7ee79dab182686219d894cf8bac85a4838943573d299e090c5" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.633620 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-74fdd766cd-6wpqm" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.646822 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21","Type":"ContainerStarted","Data":"4d862e30b41c56c498db401b236ce3b4055ba15e70d611a9811046aec8455736"} Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.663142 4728 generic.go:334] "Generic (PLEG): container finished" podID="baeadb7e-f0a8-40fc-8529-3ffc87d16c0f" containerID="3b397fe0d7c420a104894c1c16becf66b45f3b4c2aee942a00b5d6a3e69caf8a" exitCode=0 Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.663195 4728 generic.go:334] "Generic (PLEG): container finished" podID="baeadb7e-f0a8-40fc-8529-3ffc87d16c0f" containerID="fbd2753768f6a0b4c2107ffab1205e2bbf17877a2376979a78eb4774a63e79f9" exitCode=143 Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.663341 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f","Type":"ContainerDied","Data":"3b397fe0d7c420a104894c1c16becf66b45f3b4c2aee942a00b5d6a3e69caf8a"} Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.663391 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f","Type":"ContainerDied","Data":"fbd2753768f6a0b4c2107ffab1205e2bbf17877a2376979a78eb4774a63e79f9"} Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.663404 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f","Type":"ContainerDied","Data":"0ef3b4554f54a30bd3cfbf5ef0e18a6a2f64e6cf83be8ccd07ea7545f806cc2a"} Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.664192 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.668703 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.785283265 podStartE2EDuration="5.668692667s" podCreationTimestamp="2026-02-27 10:49:34 +0000 UTC" firstStartedPulling="2026-02-27 10:49:35.758236996 +0000 UTC m=+1395.720603102" lastFinishedPulling="2026-02-27 10:49:36.641646308 +0000 UTC m=+1396.604012504" observedRunningTime="2026-02-27 10:49:39.665021797 +0000 UTC m=+1399.627387903" watchObservedRunningTime="2026-02-27 10:49:39.668692667 +0000 UTC m=+1399.631058773" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.677368 4728 generic.go:334] "Generic (PLEG): container finished" podID="ac391aa0-3053-4675-a2c2-8c418ed9bd3a" containerID="cc4e75e51c2221b31239e3ccc5cb02ff809b659244ceda42d4952a1c7d985487" exitCode=0 Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.677461 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54dbd7489c-x96kn" event={"ID":"ac391aa0-3053-4675-a2c2-8c418ed9bd3a","Type":"ContainerDied","Data":"cc4e75e51c2221b31239e3ccc5cb02ff809b659244ceda42d4952a1c7d985487"} Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.697480 4728 scope.go:117] "RemoveContainer" containerID="1303e6b84b0023b055b9dd2297ed0c2fb3664f88151c8abddf87df8ef82c3a32" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.702844 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcc4fffc-ff97-47b0-885c-f8d81d189bcf","Type":"ContainerStarted","Data":"7a1ca7c2ada1415b20dbc22a8e914f5e2b2b33630a8f7c0bfef955d7a48d1a50"} Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.703719 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8589b5d689-mwmds"] Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.708226 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd979a4-451e-4f4d-affa-43bfbc671238-logs\") pod \"7bd979a4-451e-4f4d-affa-43bfbc671238\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.708268 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xftbk\" (UniqueName: \"kubernetes.io/projected/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-kube-api-access-xftbk\") pod \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.708310 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7bd979a4-451e-4f4d-affa-43bfbc671238-config-data-custom\") pod \"7bd979a4-451e-4f4d-affa-43bfbc671238\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.708360 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-scripts\") pod \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.708468 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd979a4-451e-4f4d-affa-43bfbc671238-combined-ca-bundle\") pod \"7bd979a4-451e-4f4d-affa-43bfbc671238\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.708540 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-config-data-custom\") pod \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " Feb 27 
10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.708630 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-combined-ca-bundle\") pod \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.708703 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqvdh\" (UniqueName: \"kubernetes.io/projected/7bd979a4-451e-4f4d-affa-43bfbc671238-kube-api-access-tqvdh\") pod \"7bd979a4-451e-4f4d-affa-43bfbc671238\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.708731 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-etc-machine-id\") pod \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.708762 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd979a4-451e-4f4d-affa-43bfbc671238-config-data\") pod \"7bd979a4-451e-4f4d-affa-43bfbc671238\" (UID: \"7bd979a4-451e-4f4d-affa-43bfbc671238\") " Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.708793 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-config-data\") pod \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.708808 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-logs\") pod \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\" (UID: \"baeadb7e-f0a8-40fc-8529-3ffc87d16c0f\") " Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.709576 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-logs" (OuterVolumeSpecName: "logs") pod "baeadb7e-f0a8-40fc-8529-3ffc87d16c0f" (UID: "baeadb7e-f0a8-40fc-8529-3ffc87d16c0f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.709615 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "baeadb7e-f0a8-40fc-8529-3ffc87d16c0f" (UID: "baeadb7e-f0a8-40fc-8529-3ffc87d16c0f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.709670 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd979a4-451e-4f4d-affa-43bfbc671238-logs" (OuterVolumeSpecName: "logs") pod "7bd979a4-451e-4f4d-affa-43bfbc671238" (UID: "7bd979a4-451e-4f4d-affa-43bfbc671238"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.715866 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd979a4-451e-4f4d-affa-43bfbc671238-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7bd979a4-451e-4f4d-affa-43bfbc671238" (UID: "7bd979a4-451e-4f4d-affa-43bfbc671238"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.722695 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-scripts" (OuterVolumeSpecName: "scripts") pod "baeadb7e-f0a8-40fc-8529-3ffc87d16c0f" (UID: "baeadb7e-f0a8-40fc-8529-3ffc87d16c0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.724138 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-kube-api-access-xftbk" (OuterVolumeSpecName: "kube-api-access-xftbk") pod "baeadb7e-f0a8-40fc-8529-3ffc87d16c0f" (UID: "baeadb7e-f0a8-40fc-8529-3ffc87d16c0f"). InnerVolumeSpecName "kube-api-access-xftbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.746765 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd979a4-451e-4f4d-affa-43bfbc671238-kube-api-access-tqvdh" (OuterVolumeSpecName: "kube-api-access-tqvdh") pod "7bd979a4-451e-4f4d-affa-43bfbc671238" (UID: "7bd979a4-451e-4f4d-affa-43bfbc671238"). InnerVolumeSpecName "kube-api-access-tqvdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.746887 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "baeadb7e-f0a8-40fc-8529-3ffc87d16c0f" (UID: "baeadb7e-f0a8-40fc-8529-3ffc87d16c0f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.748382 4728 scope.go:117] "RemoveContainer" containerID="2f79f0566f330c7ee79dab182686219d894cf8bac85a4838943573d299e090c5" Feb 27 10:49:39 crc kubenswrapper[4728]: E0227 10:49:39.749980 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f79f0566f330c7ee79dab182686219d894cf8bac85a4838943573d299e090c5\": container with ID starting with 2f79f0566f330c7ee79dab182686219d894cf8bac85a4838943573d299e090c5 not found: ID does not exist" containerID="2f79f0566f330c7ee79dab182686219d894cf8bac85a4838943573d299e090c5" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.750097 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f79f0566f330c7ee79dab182686219d894cf8bac85a4838943573d299e090c5"} err="failed to get container status \"2f79f0566f330c7ee79dab182686219d894cf8bac85a4838943573d299e090c5\": rpc error: code = NotFound desc = could not find container \"2f79f0566f330c7ee79dab182686219d894cf8bac85a4838943573d299e090c5\": container with ID starting with 2f79f0566f330c7ee79dab182686219d894cf8bac85a4838943573d299e090c5 not found: ID does not exist" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.750348 4728 scope.go:117] "RemoveContainer" containerID="1303e6b84b0023b055b9dd2297ed0c2fb3664f88151c8abddf87df8ef82c3a32" Feb 27 10:49:39 crc kubenswrapper[4728]: E0227 10:49:39.751211 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1303e6b84b0023b055b9dd2297ed0c2fb3664f88151c8abddf87df8ef82c3a32\": container with ID starting with 1303e6b84b0023b055b9dd2297ed0c2fb3664f88151c8abddf87df8ef82c3a32 not found: ID does not exist" containerID="1303e6b84b0023b055b9dd2297ed0c2fb3664f88151c8abddf87df8ef82c3a32" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.751266 
4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1303e6b84b0023b055b9dd2297ed0c2fb3664f88151c8abddf87df8ef82c3a32"} err="failed to get container status \"1303e6b84b0023b055b9dd2297ed0c2fb3664f88151c8abddf87df8ef82c3a32\": rpc error: code = NotFound desc = could not find container \"1303e6b84b0023b055b9dd2297ed0c2fb3664f88151c8abddf87df8ef82c3a32\": container with ID starting with 1303e6b84b0023b055b9dd2297ed0c2fb3664f88151c8abddf87df8ef82c3a32 not found: ID does not exist" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.751291 4728 scope.go:117] "RemoveContainer" containerID="3b397fe0d7c420a104894c1c16becf66b45f3b4c2aee942a00b5d6a3e69caf8a" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.776535 4728 scope.go:117] "RemoveContainer" containerID="fbd2753768f6a0b4c2107ffab1205e2bbf17877a2376979a78eb4774a63e79f9" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.786625 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "baeadb7e-f0a8-40fc-8529-3ffc87d16c0f" (UID: "baeadb7e-f0a8-40fc-8529-3ffc87d16c0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.793826 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd979a4-451e-4f4d-affa-43bfbc671238-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bd979a4-451e-4f4d-affa-43bfbc671238" (UID: "7bd979a4-451e-4f4d-affa-43bfbc671238"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.806766 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-config-data" (OuterVolumeSpecName: "config-data") pod "baeadb7e-f0a8-40fc-8529-3ffc87d16c0f" (UID: "baeadb7e-f0a8-40fc-8529-3ffc87d16c0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.812638 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd979a4-451e-4f4d-affa-43bfbc671238-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.812687 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.812698 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.812773 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqvdh\" (UniqueName: \"kubernetes.io/projected/7bd979a4-451e-4f4d-affa-43bfbc671238-kube-api-access-tqvdh\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.812785 4728 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.812926 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.813010 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.813101 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd979a4-451e-4f4d-affa-43bfbc671238-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.813113 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xftbk\" (UniqueName: \"kubernetes.io/projected/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-kube-api-access-xftbk\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.813123 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7bd979a4-451e-4f4d-affa-43bfbc671238-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.813132 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.814353 4728 scope.go:117] "RemoveContainer" containerID="3b397fe0d7c420a104894c1c16becf66b45f3b4c2aee942a00b5d6a3e69caf8a" Feb 27 10:49:39 crc kubenswrapper[4728]: E0227 10:49:39.814796 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b397fe0d7c420a104894c1c16becf66b45f3b4c2aee942a00b5d6a3e69caf8a\": container with ID starting with 3b397fe0d7c420a104894c1c16becf66b45f3b4c2aee942a00b5d6a3e69caf8a not found: ID does not exist" 
containerID="3b397fe0d7c420a104894c1c16becf66b45f3b4c2aee942a00b5d6a3e69caf8a" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.814833 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b397fe0d7c420a104894c1c16becf66b45f3b4c2aee942a00b5d6a3e69caf8a"} err="failed to get container status \"3b397fe0d7c420a104894c1c16becf66b45f3b4c2aee942a00b5d6a3e69caf8a\": rpc error: code = NotFound desc = could not find container \"3b397fe0d7c420a104894c1c16becf66b45f3b4c2aee942a00b5d6a3e69caf8a\": container with ID starting with 3b397fe0d7c420a104894c1c16becf66b45f3b4c2aee942a00b5d6a3e69caf8a not found: ID does not exist" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.814856 4728 scope.go:117] "RemoveContainer" containerID="fbd2753768f6a0b4c2107ffab1205e2bbf17877a2376979a78eb4774a63e79f9" Feb 27 10:49:39 crc kubenswrapper[4728]: E0227 10:49:39.815269 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbd2753768f6a0b4c2107ffab1205e2bbf17877a2376979a78eb4774a63e79f9\": container with ID starting with fbd2753768f6a0b4c2107ffab1205e2bbf17877a2376979a78eb4774a63e79f9 not found: ID does not exist" containerID="fbd2753768f6a0b4c2107ffab1205e2bbf17877a2376979a78eb4774a63e79f9" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.815292 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbd2753768f6a0b4c2107ffab1205e2bbf17877a2376979a78eb4774a63e79f9"} err="failed to get container status \"fbd2753768f6a0b4c2107ffab1205e2bbf17877a2376979a78eb4774a63e79f9\": rpc error: code = NotFound desc = could not find container \"fbd2753768f6a0b4c2107ffab1205e2bbf17877a2376979a78eb4774a63e79f9\": container with ID starting with fbd2753768f6a0b4c2107ffab1205e2bbf17877a2376979a78eb4774a63e79f9 not found: ID does not exist" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.815305 4728 scope.go:117] 
"RemoveContainer" containerID="3b397fe0d7c420a104894c1c16becf66b45f3b4c2aee942a00b5d6a3e69caf8a" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.815571 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b397fe0d7c420a104894c1c16becf66b45f3b4c2aee942a00b5d6a3e69caf8a"} err="failed to get container status \"3b397fe0d7c420a104894c1c16becf66b45f3b4c2aee942a00b5d6a3e69caf8a\": rpc error: code = NotFound desc = could not find container \"3b397fe0d7c420a104894c1c16becf66b45f3b4c2aee942a00b5d6a3e69caf8a\": container with ID starting with 3b397fe0d7c420a104894c1c16becf66b45f3b4c2aee942a00b5d6a3e69caf8a not found: ID does not exist" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.815614 4728 scope.go:117] "RemoveContainer" containerID="fbd2753768f6a0b4c2107ffab1205e2bbf17877a2376979a78eb4774a63e79f9" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.815996 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbd2753768f6a0b4c2107ffab1205e2bbf17877a2376979a78eb4774a63e79f9"} err="failed to get container status \"fbd2753768f6a0b4c2107ffab1205e2bbf17877a2376979a78eb4774a63e79f9\": rpc error: code = NotFound desc = could not find container \"fbd2753768f6a0b4c2107ffab1205e2bbf17877a2376979a78eb4774a63e79f9\": container with ID starting with fbd2753768f6a0b4c2107ffab1205e2bbf17877a2376979a78eb4774a63e79f9 not found: ID does not exist" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.826259 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd979a4-451e-4f4d-affa-43bfbc671238-config-data" (OuterVolumeSpecName: "config-data") pod "7bd979a4-451e-4f4d-affa-43bfbc671238" (UID: "7bd979a4-451e-4f4d-affa-43bfbc671238"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.879449 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.916036 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd979a4-451e-4f4d-affa-43bfbc671238-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.971051 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74fdd766cd-6wpqm"] Feb 27 10:49:39 crc kubenswrapper[4728]: I0227 10:49:39.980114 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-74fdd766cd-6wpqm"] Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.005717 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.013542 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.038798 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 27 10:49:40 crc kubenswrapper[4728]: E0227 10:49:40.039393 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baeadb7e-f0a8-40fc-8529-3ffc87d16c0f" containerName="cinder-api-log" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.039419 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="baeadb7e-f0a8-40fc-8529-3ffc87d16c0f" containerName="cinder-api-log" Feb 27 10:49:40 crc kubenswrapper[4728]: E0227 10:49:40.039436 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd979a4-451e-4f4d-affa-43bfbc671238" containerName="barbican-api" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.039447 4728 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7bd979a4-451e-4f4d-affa-43bfbc671238" containerName="barbican-api" Feb 27 10:49:40 crc kubenswrapper[4728]: E0227 10:49:40.039461 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baeadb7e-f0a8-40fc-8529-3ffc87d16c0f" containerName="cinder-api" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.039467 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="baeadb7e-f0a8-40fc-8529-3ffc87d16c0f" containerName="cinder-api" Feb 27 10:49:40 crc kubenswrapper[4728]: E0227 10:49:40.039532 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd979a4-451e-4f4d-affa-43bfbc671238" containerName="barbican-api-log" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.039542 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd979a4-451e-4f4d-affa-43bfbc671238" containerName="barbican-api-log" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.039803 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd979a4-451e-4f4d-affa-43bfbc671238" containerName="barbican-api-log" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.039834 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="baeadb7e-f0a8-40fc-8529-3ffc87d16c0f" containerName="cinder-api-log" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.039847 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="baeadb7e-f0a8-40fc-8529-3ffc87d16c0f" containerName="cinder-api" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.039862 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd979a4-451e-4f4d-affa-43bfbc671238" containerName="barbican-api" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.041269 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.044833 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.044911 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.045029 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.053982 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.228957 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.229254 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-logs\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.229276 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-scripts\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.229320 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-config-data\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.229353 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.229422 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk4dg\" (UniqueName: \"kubernetes.io/projected/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-kube-api-access-hk4dg\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.229444 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.229495 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-config-data-custom\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.229532 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-combined-ca-bundle\") pod 
\"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.304412 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-54dbd7489c-x96kn" podUID="ac391aa0-3053-4675-a2c2-8c418ed9bd3a" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.202:9696/\": dial tcp 10.217.0.202:9696: connect: connection refused" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.331149 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.331275 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-logs\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.331302 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-scripts\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.331361 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-config-data\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.331396 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.331481 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk4dg\" (UniqueName: \"kubernetes.io/projected/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-kube-api-access-hk4dg\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.331512 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.331532 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-config-data-custom\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.331553 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.331554 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " 
pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.331795 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-logs\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.336405 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.337533 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.337774 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.338702 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-config-data\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.345973 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-scripts\") pod 
\"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.351201 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-config-data-custom\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.358489 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk4dg\" (UniqueName: \"kubernetes.io/projected/7d7ce831-a58b-49ab-a571-f4f8072e1dcd-kube-api-access-hk4dg\") pod \"cinder-api-0\" (UID: \"7d7ce831-a58b-49ab-a571-f4f8072e1dcd\") " pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.529623 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.723452 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8589b5d689-mwmds" event={"ID":"0b60bed6-0eb1-40f9-a560-5488d7b2a551","Type":"ContainerStarted","Data":"890418648d178c766306761e429e5b58bb4569ef71fcf37497839872b9da4a52"} Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.723957 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8589b5d689-mwmds" event={"ID":"0b60bed6-0eb1-40f9-a560-5488d7b2a551","Type":"ContainerStarted","Data":"886cdee3dc25717f23dca72f5c04b24fdb1e5e106ba39770247961c00f338c1d"} Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.723976 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8589b5d689-mwmds" event={"ID":"0b60bed6-0eb1-40f9-a560-5488d7b2a551","Type":"ContainerStarted","Data":"26619c1950f406b157e1ab4fe85dc81daeb7c8b33df0766ca75e875e265e7cc4"} Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.753872 4728 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8589b5d689-mwmds" podStartSLOduration=2.753846023 podStartE2EDuration="2.753846023s" podCreationTimestamp="2026-02-27 10:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:49:40.742437544 +0000 UTC m=+1400.704803650" watchObservedRunningTime="2026-02-27 10:49:40.753846023 +0000 UTC m=+1400.716212129" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.768582 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bd979a4-451e-4f4d-affa-43bfbc671238" path="/var/lib/kubelet/pods/7bd979a4-451e-4f4d-affa-43bfbc671238/volumes" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.769557 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baeadb7e-f0a8-40fc-8529-3ffc87d16c0f" path="/var/lib/kubelet/pods/baeadb7e-f0a8-40fc-8529-3ffc87d16c0f/volumes" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.770388 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:49:40 crc kubenswrapper[4728]: I0227 10:49:40.770419 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcc4fffc-ff97-47b0-885c-f8d81d189bcf","Type":"ContainerStarted","Data":"e4be0bdaf86ef0679db41a5f996a4ee82922c465e2d6ed87f02a92c254bce567"} Feb 27 10:49:41 crc kubenswrapper[4728]: I0227 10:49:41.031224 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 10:49:41 crc kubenswrapper[4728]: I0227 10:49:41.750002 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7d7ce831-a58b-49ab-a571-f4f8072e1dcd","Type":"ContainerStarted","Data":"31f265f3810d9963dbf8d8b5432295aceea8b209650ec36dec323d3d93213363"} Feb 27 10:49:41 crc kubenswrapper[4728]: I0227 10:49:41.750435 4728 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7d7ce831-a58b-49ab-a571-f4f8072e1dcd","Type":"ContainerStarted","Data":"244d24a8fc8d7dde152b77bb1545f3f93537a3793a926a67073c60d87b84f415"} Feb 27 10:49:41 crc kubenswrapper[4728]: I0227 10:49:41.753533 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcc4fffc-ff97-47b0-885c-f8d81d189bcf","Type":"ContainerStarted","Data":"07c17f4c98c066e72adfb76f94e23dfe160bfe260b02fcc24388b969d4eb97f1"} Feb 27 10:49:41 crc kubenswrapper[4728]: I0227 10:49:41.754069 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 10:49:41 crc kubenswrapper[4728]: I0227 10:49:41.788746 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.619717407 podStartE2EDuration="6.788725914s" podCreationTimestamp="2026-02-27 10:49:35 +0000 UTC" firstStartedPulling="2026-02-27 10:49:37.119126684 +0000 UTC m=+1397.081492790" lastFinishedPulling="2026-02-27 10:49:41.288135191 +0000 UTC m=+1401.250501297" observedRunningTime="2026-02-27 10:49:41.781642202 +0000 UTC m=+1401.744008308" watchObservedRunningTime="2026-02-27 10:49:41.788725914 +0000 UTC m=+1401.751092030" Feb 27 10:49:42 crc kubenswrapper[4728]: I0227 10:49:42.766538 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7d7ce831-a58b-49ab-a571-f4f8072e1dcd","Type":"ContainerStarted","Data":"2e0f552ed56f6cecb90bfb13dd458c781ec0bb67237c0e87f3ebe66200eee618"} Feb 27 10:49:42 crc kubenswrapper[4728]: I0227 10:49:42.767029 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 27 10:49:42 crc kubenswrapper[4728]: I0227 10:49:42.770550 4728 generic.go:334] "Generic (PLEG): container finished" podID="ac391aa0-3053-4675-a2c2-8c418ed9bd3a" 
containerID="a109578385f7c7f0783ccb545d370fef90c64a9f399e14183da769b7a760367f" exitCode=0 Feb 27 10:49:42 crc kubenswrapper[4728]: I0227 10:49:42.770686 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54dbd7489c-x96kn" event={"ID":"ac391aa0-3053-4675-a2c2-8c418ed9bd3a","Type":"ContainerDied","Data":"a109578385f7c7f0783ccb545d370fef90c64a9f399e14183da769b7a760367f"} Feb 27 10:49:42 crc kubenswrapper[4728]: I0227 10:49:42.814493 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.814468978 podStartE2EDuration="2.814468978s" podCreationTimestamp="2026-02-27 10:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:49:42.807468738 +0000 UTC m=+1402.769834854" watchObservedRunningTime="2026-02-27 10:49:42.814468978 +0000 UTC m=+1402.776835094" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.196235 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.297462 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rrz4\" (UniqueName: \"kubernetes.io/projected/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-kube-api-access-2rrz4\") pod \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.297596 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-config\") pod \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.297677 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-internal-tls-certs\") pod \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.297729 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-public-tls-certs\") pod \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.298527 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-ovndb-tls-certs\") pod \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.298566 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-combined-ca-bundle\") pod \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.298581 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-httpd-config\") pod \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\" (UID: \"ac391aa0-3053-4675-a2c2-8c418ed9bd3a\") " Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.303411 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ac391aa0-3053-4675-a2c2-8c418ed9bd3a" (UID: "ac391aa0-3053-4675-a2c2-8c418ed9bd3a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.304700 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-kube-api-access-2rrz4" (OuterVolumeSpecName: "kube-api-access-2rrz4") pod "ac391aa0-3053-4675-a2c2-8c418ed9bd3a" (UID: "ac391aa0-3053-4675-a2c2-8c418ed9bd3a"). InnerVolumeSpecName "kube-api-access-2rrz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.362266 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac391aa0-3053-4675-a2c2-8c418ed9bd3a" (UID: "ac391aa0-3053-4675-a2c2-8c418ed9bd3a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.363043 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-config" (OuterVolumeSpecName: "config") pod "ac391aa0-3053-4675-a2c2-8c418ed9bd3a" (UID: "ac391aa0-3053-4675-a2c2-8c418ed9bd3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.363264 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ac391aa0-3053-4675-a2c2-8c418ed9bd3a" (UID: "ac391aa0-3053-4675-a2c2-8c418ed9bd3a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.389727 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ac391aa0-3053-4675-a2c2-8c418ed9bd3a" (UID: "ac391aa0-3053-4675-a2c2-8c418ed9bd3a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.392928 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ac391aa0-3053-4675-a2c2-8c418ed9bd3a" (UID: "ac391aa0-3053-4675-a2c2-8c418ed9bd3a"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.401670 4728 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.401839 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.401932 4728 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.402000 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rrz4\" (UniqueName: \"kubernetes.io/projected/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-kube-api-access-2rrz4\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.402064 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.402126 4728 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.402182 4728 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac391aa0-3053-4675-a2c2-8c418ed9bd3a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.787250 4728 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54dbd7489c-x96kn" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.788109 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54dbd7489c-x96kn" event={"ID":"ac391aa0-3053-4675-a2c2-8c418ed9bd3a","Type":"ContainerDied","Data":"aab8d97bffb126c27ac1d703ea996f44950eb8e950724b9e56f18ba77aa678f3"} Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.788148 4728 scope.go:117] "RemoveContainer" containerID="cc4e75e51c2221b31239e3ccc5cb02ff809b659244ceda42d4952a1c7d985487" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.840124 4728 scope.go:117] "RemoveContainer" containerID="a109578385f7c7f0783ccb545d370fef90c64a9f399e14183da769b7a760367f" Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.841901 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54dbd7489c-x96kn"] Feb 27 10:49:43 crc kubenswrapper[4728]: I0227 10:49:43.859322 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-54dbd7489c-x96kn"] Feb 27 10:49:44 crc kubenswrapper[4728]: I0227 10:49:44.750372 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac391aa0-3053-4675-a2c2-8c418ed9bd3a" path="/var/lib/kubelet/pods/ac391aa0-3053-4675-a2c2-8c418ed9bd3a/volumes" Feb 27 10:49:44 crc kubenswrapper[4728]: I0227 10:49:44.954789 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.038805 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-zcq55"] Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.039262 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-zcq55" podUID="73597ba1-35ae-48f3-a74f-14816ae7bc61" containerName="dnsmasq-dns" 
containerID="cri-o://c74115dbc6c810db98fb170a18853192bf94ccc9e2afb719b3711d29f26b429f" gracePeriod=10 Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.144485 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.199436 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.703638 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.815498 4728 generic.go:334] "Generic (PLEG): container finished" podID="73597ba1-35ae-48f3-a74f-14816ae7bc61" containerID="c74115dbc6c810db98fb170a18853192bf94ccc9e2afb719b3711d29f26b429f" exitCode=0 Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.815543 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-zcq55" event={"ID":"73597ba1-35ae-48f3-a74f-14816ae7bc61","Type":"ContainerDied","Data":"c74115dbc6c810db98fb170a18853192bf94ccc9e2afb719b3711d29f26b429f"} Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.815598 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-zcq55" event={"ID":"73597ba1-35ae-48f3-a74f-14816ae7bc61","Type":"ContainerDied","Data":"470f2fb85fa7cb46e33729307978527b22312bd40f283b3ee64bd023043cd364"} Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.815620 4728 scope.go:117] "RemoveContainer" containerID="c74115dbc6c810db98fb170a18853192bf94ccc9e2afb719b3711d29f26b429f" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.815684 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21" containerName="cinder-scheduler" 
containerID="cri-o://3e138468c9c2d0600f6be684fa1b62744cccd4456942382b240f652b5fe448b1" gracePeriod=30 Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.815568 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-zcq55" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.815917 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21" containerName="probe" containerID="cri-o://4d862e30b41c56c498db401b236ce3b4055ba15e70d611a9811046aec8455736" gracePeriod=30 Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.839051 4728 scope.go:117] "RemoveContainer" containerID="62449a8dde36f4272117abc5f949454f04141e898120a07fb80e1e23ef083181" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.877718 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-config\") pod \"73597ba1-35ae-48f3-a74f-14816ae7bc61\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.877776 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-ovsdbserver-nb\") pod \"73597ba1-35ae-48f3-a74f-14816ae7bc61\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.877813 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-dns-swift-storage-0\") pod \"73597ba1-35ae-48f3-a74f-14816ae7bc61\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.877847 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-42tm2\" (UniqueName: \"kubernetes.io/projected/73597ba1-35ae-48f3-a74f-14816ae7bc61-kube-api-access-42tm2\") pod \"73597ba1-35ae-48f3-a74f-14816ae7bc61\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.877912 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-ovsdbserver-sb\") pod \"73597ba1-35ae-48f3-a74f-14816ae7bc61\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.877987 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-dns-svc\") pod \"73597ba1-35ae-48f3-a74f-14816ae7bc61\" (UID: \"73597ba1-35ae-48f3-a74f-14816ae7bc61\") " Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.886005 4728 scope.go:117] "RemoveContainer" containerID="c74115dbc6c810db98fb170a18853192bf94ccc9e2afb719b3711d29f26b429f" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.886788 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73597ba1-35ae-48f3-a74f-14816ae7bc61-kube-api-access-42tm2" (OuterVolumeSpecName: "kube-api-access-42tm2") pod "73597ba1-35ae-48f3-a74f-14816ae7bc61" (UID: "73597ba1-35ae-48f3-a74f-14816ae7bc61"). InnerVolumeSpecName "kube-api-access-42tm2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:45 crc kubenswrapper[4728]: E0227 10:49:45.891947 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c74115dbc6c810db98fb170a18853192bf94ccc9e2afb719b3711d29f26b429f\": container with ID starting with c74115dbc6c810db98fb170a18853192bf94ccc9e2afb719b3711d29f26b429f not found: ID does not exist" containerID="c74115dbc6c810db98fb170a18853192bf94ccc9e2afb719b3711d29f26b429f" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.891985 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c74115dbc6c810db98fb170a18853192bf94ccc9e2afb719b3711d29f26b429f"} err="failed to get container status \"c74115dbc6c810db98fb170a18853192bf94ccc9e2afb719b3711d29f26b429f\": rpc error: code = NotFound desc = could not find container \"c74115dbc6c810db98fb170a18853192bf94ccc9e2afb719b3711d29f26b429f\": container with ID starting with c74115dbc6c810db98fb170a18853192bf94ccc9e2afb719b3711d29f26b429f not found: ID does not exist" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.892006 4728 scope.go:117] "RemoveContainer" containerID="62449a8dde36f4272117abc5f949454f04141e898120a07fb80e1e23ef083181" Feb 27 10:49:45 crc kubenswrapper[4728]: E0227 10:49:45.892543 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62449a8dde36f4272117abc5f949454f04141e898120a07fb80e1e23ef083181\": container with ID starting with 62449a8dde36f4272117abc5f949454f04141e898120a07fb80e1e23ef083181 not found: ID does not exist" containerID="62449a8dde36f4272117abc5f949454f04141e898120a07fb80e1e23ef083181" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.892592 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62449a8dde36f4272117abc5f949454f04141e898120a07fb80e1e23ef083181"} 
err="failed to get container status \"62449a8dde36f4272117abc5f949454f04141e898120a07fb80e1e23ef083181\": rpc error: code = NotFound desc = could not find container \"62449a8dde36f4272117abc5f949454f04141e898120a07fb80e1e23ef083181\": container with ID starting with 62449a8dde36f4272117abc5f949454f04141e898120a07fb80e1e23ef083181 not found: ID does not exist" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.944376 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "73597ba1-35ae-48f3-a74f-14816ae7bc61" (UID: "73597ba1-35ae-48f3-a74f-14816ae7bc61"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.949517 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "73597ba1-35ae-48f3-a74f-14816ae7bc61" (UID: "73597ba1-35ae-48f3-a74f-14816ae7bc61"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.954539 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "73597ba1-35ae-48f3-a74f-14816ae7bc61" (UID: "73597ba1-35ae-48f3-a74f-14816ae7bc61"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.960031 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "73597ba1-35ae-48f3-a74f-14816ae7bc61" (UID: "73597ba1-35ae-48f3-a74f-14816ae7bc61"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.960849 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-config" (OuterVolumeSpecName: "config") pod "73597ba1-35ae-48f3-a74f-14816ae7bc61" (UID: "73597ba1-35ae-48f3-a74f-14816ae7bc61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.980417 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.980447 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.980457 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.980469 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:45 
crc kubenswrapper[4728]: I0227 10:49:45.980478 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42tm2\" (UniqueName: \"kubernetes.io/projected/73597ba1-35ae-48f3-a74f-14816ae7bc61-kube-api-access-42tm2\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:45 crc kubenswrapper[4728]: I0227 10:49:45.980487 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73597ba1-35ae-48f3-a74f-14816ae7bc61-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:46 crc kubenswrapper[4728]: I0227 10:49:46.153427 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-zcq55"] Feb 27 10:49:46 crc kubenswrapper[4728]: I0227 10:49:46.169759 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-zcq55"] Feb 27 10:49:46 crc kubenswrapper[4728]: I0227 10:49:46.738775 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73597ba1-35ae-48f3-a74f-14816ae7bc61" path="/var/lib/kubelet/pods/73597ba1-35ae-48f3-a74f-14816ae7bc61/volumes" Feb 27 10:49:46 crc kubenswrapper[4728]: I0227 10:49:46.828189 4728 generic.go:334] "Generic (PLEG): container finished" podID="80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21" containerID="4d862e30b41c56c498db401b236ce3b4055ba15e70d611a9811046aec8455736" exitCode=0 Feb 27 10:49:46 crc kubenswrapper[4728]: I0227 10:49:46.828233 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21","Type":"ContainerDied","Data":"4d862e30b41c56c498db401b236ce3b4055ba15e70d611a9811046aec8455736"} Feb 27 10:49:47 crc kubenswrapper[4728]: I0227 10:49:47.816956 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:47 crc kubenswrapper[4728]: I0227 10:49:47.827607 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.219795 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-768d449478-rt9ff"] Feb 27 10:49:48 crc kubenswrapper[4728]: E0227 10:49:48.220453 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73597ba1-35ae-48f3-a74f-14816ae7bc61" containerName="init" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.220471 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="73597ba1-35ae-48f3-a74f-14816ae7bc61" containerName="init" Feb 27 10:49:48 crc kubenswrapper[4728]: E0227 10:49:48.220489 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac391aa0-3053-4675-a2c2-8c418ed9bd3a" containerName="neutron-httpd" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.220497 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac391aa0-3053-4675-a2c2-8c418ed9bd3a" containerName="neutron-httpd" Feb 27 10:49:48 crc kubenswrapper[4728]: E0227 10:49:48.220528 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac391aa0-3053-4675-a2c2-8c418ed9bd3a" containerName="neutron-api" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.220534 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac391aa0-3053-4675-a2c2-8c418ed9bd3a" containerName="neutron-api" Feb 27 10:49:48 crc kubenswrapper[4728]: E0227 10:49:48.220565 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73597ba1-35ae-48f3-a74f-14816ae7bc61" containerName="dnsmasq-dns" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.220571 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="73597ba1-35ae-48f3-a74f-14816ae7bc61" containerName="dnsmasq-dns" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.220772 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac391aa0-3053-4675-a2c2-8c418ed9bd3a" containerName="neutron-api" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 
10:49:48.220803 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="73597ba1-35ae-48f3-a74f-14816ae7bc61" containerName="dnsmasq-dns" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.220821 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac391aa0-3053-4675-a2c2-8c418ed9bd3a" containerName="neutron-httpd" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.221928 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.239385 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-768d449478-rt9ff"] Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.328946 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89946113-053b-4a87-acd4-50402c66d0c2-public-tls-certs\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.329158 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89946113-053b-4a87-acd4-50402c66d0c2-combined-ca-bundle\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.329199 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89946113-053b-4a87-acd4-50402c66d0c2-config-data\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.329312 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89946113-053b-4a87-acd4-50402c66d0c2-scripts\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.329602 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89946113-053b-4a87-acd4-50402c66d0c2-internal-tls-certs\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.329680 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89946113-053b-4a87-acd4-50402c66d0c2-logs\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.329893 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67cpf\" (UniqueName: \"kubernetes.io/projected/89946113-053b-4a87-acd4-50402c66d0c2-kube-api-access-67cpf\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.440439 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89946113-053b-4a87-acd4-50402c66d0c2-internal-tls-certs\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.440550 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89946113-053b-4a87-acd4-50402c66d0c2-logs\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.440752 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67cpf\" (UniqueName: \"kubernetes.io/projected/89946113-053b-4a87-acd4-50402c66d0c2-kube-api-access-67cpf\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.440898 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89946113-053b-4a87-acd4-50402c66d0c2-public-tls-certs\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.441038 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89946113-053b-4a87-acd4-50402c66d0c2-combined-ca-bundle\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.441071 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89946113-053b-4a87-acd4-50402c66d0c2-config-data\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.441095 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/89946113-053b-4a87-acd4-50402c66d0c2-scripts\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.442790 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89946113-053b-4a87-acd4-50402c66d0c2-logs\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.448997 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89946113-053b-4a87-acd4-50402c66d0c2-combined-ca-bundle\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.449057 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89946113-053b-4a87-acd4-50402c66d0c2-config-data\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.449443 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89946113-053b-4a87-acd4-50402c66d0c2-scripts\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.460913 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67cpf\" (UniqueName: \"kubernetes.io/projected/89946113-053b-4a87-acd4-50402c66d0c2-kube-api-access-67cpf\") pod \"placement-768d449478-rt9ff\" (UID: 
\"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.467686 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89946113-053b-4a87-acd4-50402c66d0c2-public-tls-certs\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.468105 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89946113-053b-4a87-acd4-50402c66d0c2-internal-tls-certs\") pod \"placement-768d449478-rt9ff\" (UID: \"89946113-053b-4a87-acd4-50402c66d0c2\") " pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:48 crc kubenswrapper[4728]: I0227 10:49:48.553221 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:49 crc kubenswrapper[4728]: I0227 10:49:49.103186 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-768d449478-rt9ff"] Feb 27 10:49:49 crc kubenswrapper[4728]: I0227 10:49:49.747938 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6cb4766cb5-wf4ln" Feb 27 10:49:49 crc kubenswrapper[4728]: I0227 10:49:49.892243 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-768d449478-rt9ff" event={"ID":"89946113-053b-4a87-acd4-50402c66d0c2","Type":"ContainerStarted","Data":"a829bdd16191dfaade6b874395cfb4eac1ea743b409628c189d139444258a671"} Feb 27 10:49:49 crc kubenswrapper[4728]: I0227 10:49:49.892289 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-768d449478-rt9ff" event={"ID":"89946113-053b-4a87-acd4-50402c66d0c2","Type":"ContainerStarted","Data":"c00938bb1b56953c69cb5d9c33c470372766b0b9ed53459cc948c18a06da3cb1"} Feb 
27 10:49:49 crc kubenswrapper[4728]: I0227 10:49:49.892299 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-768d449478-rt9ff" event={"ID":"89946113-053b-4a87-acd4-50402c66d0c2","Type":"ContainerStarted","Data":"1e1c04491f31d392d570a9f29798551bfce2e30772fc126794662c21cfc8b6ec"} Feb 27 10:49:49 crc kubenswrapper[4728]: I0227 10:49:49.892576 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:49 crc kubenswrapper[4728]: I0227 10:49:49.892603 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-768d449478-rt9ff" Feb 27 10:49:49 crc kubenswrapper[4728]: I0227 10:49:49.925680 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-768d449478-rt9ff" podStartSLOduration=1.925659661 podStartE2EDuration="1.925659661s" podCreationTimestamp="2026-02-27 10:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:49:49.91786306 +0000 UTC m=+1409.880229166" watchObservedRunningTime="2026-02-27 10:49:49.925659661 +0000 UTC m=+1409.888025767" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.608580 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.611110 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.613631 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-4tfv8" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.617330 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.617529 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.624623 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.708675 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b6a05f62-98d5-4df6-9cbb-d4bbb1778642-openstack-config-secret\") pod \"openstackclient\" (UID: \"b6a05f62-98d5-4df6-9cbb-d4bbb1778642\") " pod="openstack/openstackclient" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.708750 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d24xk\" (UniqueName: \"kubernetes.io/projected/b6a05f62-98d5-4df6-9cbb-d4bbb1778642-kube-api-access-d24xk\") pod \"openstackclient\" (UID: \"b6a05f62-98d5-4df6-9cbb-d4bbb1778642\") " pod="openstack/openstackclient" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.709066 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b6a05f62-98d5-4df6-9cbb-d4bbb1778642-openstack-config\") pod \"openstackclient\" (UID: \"b6a05f62-98d5-4df6-9cbb-d4bbb1778642\") " pod="openstack/openstackclient" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.709161 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6a05f62-98d5-4df6-9cbb-d4bbb1778642-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b6a05f62-98d5-4df6-9cbb-d4bbb1778642\") " pod="openstack/openstackclient" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.811646 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b6a05f62-98d5-4df6-9cbb-d4bbb1778642-openstack-config-secret\") pod \"openstackclient\" (UID: \"b6a05f62-98d5-4df6-9cbb-d4bbb1778642\") " pod="openstack/openstackclient" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.811754 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d24xk\" (UniqueName: \"kubernetes.io/projected/b6a05f62-98d5-4df6-9cbb-d4bbb1778642-kube-api-access-d24xk\") pod \"openstackclient\" (UID: \"b6a05f62-98d5-4df6-9cbb-d4bbb1778642\") " pod="openstack/openstackclient" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.811894 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b6a05f62-98d5-4df6-9cbb-d4bbb1778642-openstack-config\") pod \"openstackclient\" (UID: \"b6a05f62-98d5-4df6-9cbb-d4bbb1778642\") " pod="openstack/openstackclient" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.811982 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6a05f62-98d5-4df6-9cbb-d4bbb1778642-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b6a05f62-98d5-4df6-9cbb-d4bbb1778642\") " pod="openstack/openstackclient" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.812718 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/b6a05f62-98d5-4df6-9cbb-d4bbb1778642-openstack-config\") pod \"openstackclient\" (UID: \"b6a05f62-98d5-4df6-9cbb-d4bbb1778642\") " pod="openstack/openstackclient" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.819411 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6a05f62-98d5-4df6-9cbb-d4bbb1778642-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b6a05f62-98d5-4df6-9cbb-d4bbb1778642\") " pod="openstack/openstackclient" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.820478 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b6a05f62-98d5-4df6-9cbb-d4bbb1778642-openstack-config-secret\") pod \"openstackclient\" (UID: \"b6a05f62-98d5-4df6-9cbb-d4bbb1778642\") " pod="openstack/openstackclient" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.828300 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d24xk\" (UniqueName: \"kubernetes.io/projected/b6a05f62-98d5-4df6-9cbb-d4bbb1778642-kube-api-access-d24xk\") pod \"openstackclient\" (UID: \"b6a05f62-98d5-4df6-9cbb-d4bbb1778642\") " pod="openstack/openstackclient" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.896962 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.907851 4728 generic.go:334] "Generic (PLEG): container finished" podID="80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21" containerID="3e138468c9c2d0600f6be684fa1b62744cccd4456942382b240f652b5fe448b1" exitCode=0 Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.909189 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.909402 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21","Type":"ContainerDied","Data":"3e138468c9c2d0600f6be684fa1b62744cccd4456942382b240f652b5fe448b1"} Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.909435 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21","Type":"ContainerDied","Data":"3381c8a23161c81f0ef34a3308b77600f122fa003409a71835527c67902d6984"} Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.909453 4728 scope.go:117] "RemoveContainer" containerID="4d862e30b41c56c498db401b236ce3b4055ba15e70d611a9811046aec8455736" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.941665 4728 scope.go:117] "RemoveContainer" containerID="3e138468c9c2d0600f6be684fa1b62744cccd4456942382b240f652b5fe448b1" Feb 27 10:49:50 crc kubenswrapper[4728]: I0227 10:49:50.943114 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.009354 4728 scope.go:117] "RemoveContainer" containerID="4d862e30b41c56c498db401b236ce3b4055ba15e70d611a9811046aec8455736" Feb 27 10:49:51 crc kubenswrapper[4728]: E0227 10:49:51.009941 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d862e30b41c56c498db401b236ce3b4055ba15e70d611a9811046aec8455736\": container with ID starting with 4d862e30b41c56c498db401b236ce3b4055ba15e70d611a9811046aec8455736 not found: ID does not exist" containerID="4d862e30b41c56c498db401b236ce3b4055ba15e70d611a9811046aec8455736" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.010007 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d862e30b41c56c498db401b236ce3b4055ba15e70d611a9811046aec8455736"} err="failed to get container status \"4d862e30b41c56c498db401b236ce3b4055ba15e70d611a9811046aec8455736\": rpc error: code = NotFound desc = could not find container \"4d862e30b41c56c498db401b236ce3b4055ba15e70d611a9811046aec8455736\": container with ID starting with 4d862e30b41c56c498db401b236ce3b4055ba15e70d611a9811046aec8455736 not found: ID does not exist" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.010040 4728 scope.go:117] "RemoveContainer" containerID="3e138468c9c2d0600f6be684fa1b62744cccd4456942382b240f652b5fe448b1" Feb 27 10:49:51 crc kubenswrapper[4728]: E0227 10:49:51.010898 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e138468c9c2d0600f6be684fa1b62744cccd4456942382b240f652b5fe448b1\": container with ID starting with 3e138468c9c2d0600f6be684fa1b62744cccd4456942382b240f652b5fe448b1 not found: ID does not exist" containerID="3e138468c9c2d0600f6be684fa1b62744cccd4456942382b240f652b5fe448b1" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 
10:49:51.010953 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e138468c9c2d0600f6be684fa1b62744cccd4456942382b240f652b5fe448b1"} err="failed to get container status \"3e138468c9c2d0600f6be684fa1b62744cccd4456942382b240f652b5fe448b1\": rpc error: code = NotFound desc = could not find container \"3e138468c9c2d0600f6be684fa1b62744cccd4456942382b240f652b5fe448b1\": container with ID starting with 3e138468c9c2d0600f6be684fa1b62744cccd4456942382b240f652b5fe448b1 not found: ID does not exist" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.017114 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-config-data\") pod \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.017565 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-config-data-custom\") pod \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.017608 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-scripts\") pod \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.017806 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx2hm\" (UniqueName: \"kubernetes.io/projected/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-kube-api-access-xx2hm\") pod \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " Feb 27 10:49:51 crc kubenswrapper[4728]: 
I0227 10:49:51.017934 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-combined-ca-bundle\") pod \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.017969 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-etc-machine-id\") pod \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\" (UID: \"80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21\") " Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.019922 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21" (UID: "80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.024169 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21" (UID: "80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.025663 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-scripts" (OuterVolumeSpecName: "scripts") pod "80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21" (UID: "80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.042270 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-kube-api-access-xx2hm" (OuterVolumeSpecName: "kube-api-access-xx2hm") pod "80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21" (UID: "80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21"). InnerVolumeSpecName "kube-api-access-xx2hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.102739 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21" (UID: "80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.121295 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.121334 4728 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.121345 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.121356 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-scripts\") on 
node \"crc\" DevicePath \"\"" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.121366 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx2hm\" (UniqueName: \"kubernetes.io/projected/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-kube-api-access-xx2hm\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.167878 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-config-data" (OuterVolumeSpecName: "config-data") pod "80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21" (UID: "80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.224271 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.295890 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.305341 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.322278 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 10:49:51 crc kubenswrapper[4728]: E0227 10:49:51.322737 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21" containerName="cinder-scheduler" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.322749 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21" containerName="cinder-scheduler" Feb 27 10:49:51 crc kubenswrapper[4728]: E0227 10:49:51.322818 4728 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21" containerName="probe" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.322825 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21" containerName="probe" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.323013 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21" containerName="cinder-scheduler" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.323045 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21" containerName="probe" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.324179 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.327291 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.347366 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.427867 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q96zd\" (UniqueName: \"kubernetes.io/projected/77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2-kube-api-access-q96zd\") pod \"cinder-scheduler-0\" (UID: \"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.428258 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2-config-data\") pod \"cinder-scheduler-0\" (UID: \"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.428288 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.428515 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.428543 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2-scripts\") pod \"cinder-scheduler-0\" (UID: \"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.428747 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.492358 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.530451 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2\") " 
pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.530516 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2-scripts\") pod \"cinder-scheduler-0\" (UID: \"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.530594 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.530637 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q96zd\" (UniqueName: \"kubernetes.io/projected/77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2-kube-api-access-q96zd\") pod \"cinder-scheduler-0\" (UID: \"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.530666 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2-config-data\") pod \"cinder-scheduler-0\" (UID: \"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.530682 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.531583 4728 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.535415 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2-scripts\") pod \"cinder-scheduler-0\" (UID: \"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.535952 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.536398 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2-config-data\") pod \"cinder-scheduler-0\" (UID: \"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.536988 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.553845 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q96zd\" (UniqueName: \"kubernetes.io/projected/77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2-kube-api-access-q96zd\") pod \"cinder-scheduler-0\" (UID: 
\"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2\") " pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.645859 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 10:49:51 crc kubenswrapper[4728]: I0227 10:49:51.925519 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b6a05f62-98d5-4df6-9cbb-d4bbb1778642","Type":"ContainerStarted","Data":"dbed970196f5c214dcb96fff6007fc069d141f2b4f6a539727f6ed6ab3433f4e"} Feb 27 10:49:52 crc kubenswrapper[4728]: I0227 10:49:52.141520 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 10:49:52 crc kubenswrapper[4728]: I0227 10:49:52.741531 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21" path="/var/lib/kubelet/pods/80018c03-d2c1-4ac8-8e4b-1bb5e3c26b21/volumes" Feb 27 10:49:52 crc kubenswrapper[4728]: I0227 10:49:52.957717 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2","Type":"ContainerStarted","Data":"d5dc44658676435f8bec95e69c60e168118ac8c8fac7f3a58b46544972ccac9c"} Feb 27 10:49:52 crc kubenswrapper[4728]: I0227 10:49:52.958052 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2","Type":"ContainerStarted","Data":"f8508f46452bf72bd785d68ca61192b12c351a89dfdc4da26e26d514df07ea59"} Feb 27 10:49:52 crc kubenswrapper[4728]: I0227 10:49:52.969790 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 27 10:49:53 crc kubenswrapper[4728]: I0227 10:49:53.971155 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2","Type":"ContainerStarted","Data":"40db99b9a3dc23d059d3e1fcb7a61e3cf2224086df4af5976a67055ef37b5570"} Feb 27 10:49:53 crc kubenswrapper[4728]: I0227 10:49:53.994065 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.994048456 podStartE2EDuration="2.994048456s" podCreationTimestamp="2026-02-27 10:49:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:49:53.987641523 +0000 UTC m=+1413.950007639" watchObservedRunningTime="2026-02-27 10:49:53.994048456 +0000 UTC m=+1413.956414552" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.139143 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-84dc467799-w92cc"] Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.141693 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-84dc467799-w92cc" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.144975 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.155418 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.155628 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-s58fl" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.162466 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-84dc467799-w92cc"] Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.246629 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-6gjtz"] Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.250957 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0db7498-5f3f-4550-932e-64f7d721e902-config-data\") pod \"heat-engine-84dc467799-w92cc\" (UID: \"b0db7498-5f3f-4550-932e-64f7d721e902\") " pod="openstack/heat-engine-84dc467799-w92cc" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.251021 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0db7498-5f3f-4550-932e-64f7d721e902-combined-ca-bundle\") pod \"heat-engine-84dc467799-w92cc\" (UID: \"b0db7498-5f3f-4550-932e-64f7d721e902\") " pod="openstack/heat-engine-84dc467799-w92cc" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.251041 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0db7498-5f3f-4550-932e-64f7d721e902-config-data-custom\") pod \"heat-engine-84dc467799-w92cc\" (UID: \"b0db7498-5f3f-4550-932e-64f7d721e902\") " pod="openstack/heat-engine-84dc467799-w92cc" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.251099 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdj7s\" (UniqueName: \"kubernetes.io/projected/b0db7498-5f3f-4550-932e-64f7d721e902-kube-api-access-mdj7s\") pod \"heat-engine-84dc467799-w92cc\" (UID: \"b0db7498-5f3f-4550-932e-64f7d721e902\") " pod="openstack/heat-engine-84dc467799-w92cc" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.251178 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.271630 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-6gjtz"] Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.351648 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-54dc5858d7-7975b"] Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.354204 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0db7498-5f3f-4550-932e-64f7d721e902-combined-ca-bundle\") pod \"heat-engine-84dc467799-w92cc\" (UID: \"b0db7498-5f3f-4550-932e-64f7d721e902\") " pod="openstack/heat-engine-84dc467799-w92cc" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.354242 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0db7498-5f3f-4550-932e-64f7d721e902-config-data-custom\") pod \"heat-engine-84dc467799-w92cc\" (UID: \"b0db7498-5f3f-4550-932e-64f7d721e902\") " pod="openstack/heat-engine-84dc467799-w92cc" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.354315 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdj7s\" (UniqueName: \"kubernetes.io/projected/b0db7498-5f3f-4550-932e-64f7d721e902-kube-api-access-mdj7s\") pod \"heat-engine-84dc467799-w92cc\" (UID: \"b0db7498-5f3f-4550-932e-64f7d721e902\") " pod="openstack/heat-engine-84dc467799-w92cc" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.354435 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0db7498-5f3f-4550-932e-64f7d721e902-config-data\") pod \"heat-engine-84dc467799-w92cc\" (UID: \"b0db7498-5f3f-4550-932e-64f7d721e902\") " pod="openstack/heat-engine-84dc467799-w92cc" Feb 27 10:49:56 crc 
kubenswrapper[4728]: I0227 10:49:56.358327 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-54dc5858d7-7975b" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.361604 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.368352 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0db7498-5f3f-4550-932e-64f7d721e902-config-data-custom\") pod \"heat-engine-84dc467799-w92cc\" (UID: \"b0db7498-5f3f-4550-932e-64f7d721e902\") " pod="openstack/heat-engine-84dc467799-w92cc" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.370623 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0db7498-5f3f-4550-932e-64f7d721e902-config-data\") pod \"heat-engine-84dc467799-w92cc\" (UID: \"b0db7498-5f3f-4550-932e-64f7d721e902\") " pod="openstack/heat-engine-84dc467799-w92cc" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.383522 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0db7498-5f3f-4550-932e-64f7d721e902-combined-ca-bundle\") pod \"heat-engine-84dc467799-w92cc\" (UID: \"b0db7498-5f3f-4550-932e-64f7d721e902\") " pod="openstack/heat-engine-84dc467799-w92cc" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.388146 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdj7s\" (UniqueName: \"kubernetes.io/projected/b0db7498-5f3f-4550-932e-64f7d721e902-kube-api-access-mdj7s\") pod \"heat-engine-84dc467799-w92cc\" (UID: \"b0db7498-5f3f-4550-932e-64f7d721e902\") " pod="openstack/heat-engine-84dc467799-w92cc" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.391776 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-cfnapi-54dc5858d7-7975b"] Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.407669 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5dcd5b5d76-tfs97"] Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.409271 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5dcd5b5d76-tfs97" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.415349 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.419485 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5dcd5b5d76-tfs97"] Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.466762 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-combined-ca-bundle\") pod \"heat-cfnapi-54dc5858d7-7975b\" (UID: \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\") " pod="openstack/heat-cfnapi-54dc5858d7-7975b" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.466928 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-config\") pod \"dnsmasq-dns-7d978555f9-6gjtz\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.467147 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-dns-svc\") pod \"dnsmasq-dns-7d978555f9-6gjtz\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.467202 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smjx7\" (UniqueName: \"kubernetes.io/projected/11767f8f-ebed-4306-b5f7-5e79182d0ad1-kube-api-access-smjx7\") pod \"dnsmasq-dns-7d978555f9-6gjtz\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.467242 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-config-data-custom\") pod \"heat-cfnapi-54dc5858d7-7975b\" (UID: \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\") " pod="openstack/heat-cfnapi-54dc5858d7-7975b" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.467260 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9bhr\" (UniqueName: \"kubernetes.io/projected/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-kube-api-access-x9bhr\") pod \"heat-cfnapi-54dc5858d7-7975b\" (UID: \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\") " pod="openstack/heat-cfnapi-54dc5858d7-7975b" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.467365 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-6gjtz\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.467423 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-6gjtz\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " 
pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.467438 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-6gjtz\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.467555 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-config-data\") pod \"heat-cfnapi-54dc5858d7-7975b\" (UID: \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\") " pod="openstack/heat-cfnapi-54dc5858d7-7975b" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.473144 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-84dc467799-w92cc" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.569816 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6hh8\" (UniqueName: \"kubernetes.io/projected/96dc96a4-9c81-4702-8678-1f6824535e01-kube-api-access-s6hh8\") pod \"heat-api-5dcd5b5d76-tfs97\" (UID: \"96dc96a4-9c81-4702-8678-1f6824535e01\") " pod="openstack/heat-api-5dcd5b5d76-tfs97" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.569861 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96dc96a4-9c81-4702-8678-1f6824535e01-config-data\") pod \"heat-api-5dcd5b5d76-tfs97\" (UID: \"96dc96a4-9c81-4702-8678-1f6824535e01\") " pod="openstack/heat-api-5dcd5b5d76-tfs97" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.569893 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-6gjtz\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.569931 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-6gjtz\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.569953 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-6gjtz\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.570001 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-config-data\") pod \"heat-cfnapi-54dc5858d7-7975b\" (UID: \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\") " pod="openstack/heat-cfnapi-54dc5858d7-7975b" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.570039 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-combined-ca-bundle\") pod \"heat-cfnapi-54dc5858d7-7975b\" (UID: \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\") " pod="openstack/heat-cfnapi-54dc5858d7-7975b" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.570072 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-config\") pod \"dnsmasq-dns-7d978555f9-6gjtz\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.570111 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96dc96a4-9c81-4702-8678-1f6824535e01-config-data-custom\") pod \"heat-api-5dcd5b5d76-tfs97\" (UID: \"96dc96a4-9c81-4702-8678-1f6824535e01\") " pod="openstack/heat-api-5dcd5b5d76-tfs97" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.570128 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96dc96a4-9c81-4702-8678-1f6824535e01-combined-ca-bundle\") pod \"heat-api-5dcd5b5d76-tfs97\" (UID: \"96dc96a4-9c81-4702-8678-1f6824535e01\") " pod="openstack/heat-api-5dcd5b5d76-tfs97" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.570178 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-dns-svc\") pod \"dnsmasq-dns-7d978555f9-6gjtz\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.570205 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smjx7\" (UniqueName: \"kubernetes.io/projected/11767f8f-ebed-4306-b5f7-5e79182d0ad1-kube-api-access-smjx7\") pod \"dnsmasq-dns-7d978555f9-6gjtz\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.570226 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-config-data-custom\") pod \"heat-cfnapi-54dc5858d7-7975b\" (UID: \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\") " pod="openstack/heat-cfnapi-54dc5858d7-7975b" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.570244 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9bhr\" (UniqueName: \"kubernetes.io/projected/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-kube-api-access-x9bhr\") pod \"heat-cfnapi-54dc5858d7-7975b\" (UID: \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\") " pod="openstack/heat-cfnapi-54dc5858d7-7975b" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.571126 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-6gjtz\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.576120 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-config\") pod \"dnsmasq-dns-7d978555f9-6gjtz\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.576391 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-dns-svc\") pod \"dnsmasq-dns-7d978555f9-6gjtz\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.577311 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-config-data-custom\") pod 
\"heat-cfnapi-54dc5858d7-7975b\" (UID: \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\") " pod="openstack/heat-cfnapi-54dc5858d7-7975b" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.577405 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-6gjtz\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.577637 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-6gjtz\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.587355 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-combined-ca-bundle\") pod \"heat-cfnapi-54dc5858d7-7975b\" (UID: \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\") " pod="openstack/heat-cfnapi-54dc5858d7-7975b" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.590822 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smjx7\" (UniqueName: \"kubernetes.io/projected/11767f8f-ebed-4306-b5f7-5e79182d0ad1-kube-api-access-smjx7\") pod \"dnsmasq-dns-7d978555f9-6gjtz\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.595619 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9bhr\" (UniqueName: \"kubernetes.io/projected/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-kube-api-access-x9bhr\") pod \"heat-cfnapi-54dc5858d7-7975b\" (UID: 
\"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\") " pod="openstack/heat-cfnapi-54dc5858d7-7975b" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.596995 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-config-data\") pod \"heat-cfnapi-54dc5858d7-7975b\" (UID: \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\") " pod="openstack/heat-cfnapi-54dc5858d7-7975b" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.605308 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.647918 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.672294 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6hh8\" (UniqueName: \"kubernetes.io/projected/96dc96a4-9c81-4702-8678-1f6824535e01-kube-api-access-s6hh8\") pod \"heat-api-5dcd5b5d76-tfs97\" (UID: \"96dc96a4-9c81-4702-8678-1f6824535e01\") " pod="openstack/heat-api-5dcd5b5d76-tfs97" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.672334 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96dc96a4-9c81-4702-8678-1f6824535e01-config-data\") pod \"heat-api-5dcd5b5d76-tfs97\" (UID: \"96dc96a4-9c81-4702-8678-1f6824535e01\") " pod="openstack/heat-api-5dcd5b5d76-tfs97" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.672469 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96dc96a4-9c81-4702-8678-1f6824535e01-config-data-custom\") pod \"heat-api-5dcd5b5d76-tfs97\" (UID: \"96dc96a4-9c81-4702-8678-1f6824535e01\") " pod="openstack/heat-api-5dcd5b5d76-tfs97" Feb 27 10:49:56 crc 
kubenswrapper[4728]: I0227 10:49:56.672483 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96dc96a4-9c81-4702-8678-1f6824535e01-combined-ca-bundle\") pod \"heat-api-5dcd5b5d76-tfs97\" (UID: \"96dc96a4-9c81-4702-8678-1f6824535e01\") " pod="openstack/heat-api-5dcd5b5d76-tfs97" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.676740 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96dc96a4-9c81-4702-8678-1f6824535e01-combined-ca-bundle\") pod \"heat-api-5dcd5b5d76-tfs97\" (UID: \"96dc96a4-9c81-4702-8678-1f6824535e01\") " pod="openstack/heat-api-5dcd5b5d76-tfs97" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.680034 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96dc96a4-9c81-4702-8678-1f6824535e01-config-data-custom\") pod \"heat-api-5dcd5b5d76-tfs97\" (UID: \"96dc96a4-9c81-4702-8678-1f6824535e01\") " pod="openstack/heat-api-5dcd5b5d76-tfs97" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.681519 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96dc96a4-9c81-4702-8678-1f6824535e01-config-data\") pod \"heat-api-5dcd5b5d76-tfs97\" (UID: \"96dc96a4-9c81-4702-8678-1f6824535e01\") " pod="openstack/heat-api-5dcd5b5d76-tfs97" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.691103 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6hh8\" (UniqueName: \"kubernetes.io/projected/96dc96a4-9c81-4702-8678-1f6824535e01-kube-api-access-s6hh8\") pod \"heat-api-5dcd5b5d76-tfs97\" (UID: \"96dc96a4-9c81-4702-8678-1f6824535e01\") " pod="openstack/heat-api-5dcd5b5d76-tfs97" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.795350 4728 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/swift-proxy-6bfbb66dbc-8ddj8"] Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.797734 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.801394 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.801576 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.801682 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.815900 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6bfbb66dbc-8ddj8"] Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.820420 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-54dc5858d7-7975b" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.821395 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5dcd5b5d76-tfs97" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.982511 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0633a337-c673-43b4-a012-ac41403a02a1-etc-swift\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.982667 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0633a337-c673-43b4-a012-ac41403a02a1-config-data\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.982725 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0633a337-c673-43b4-a012-ac41403a02a1-run-httpd\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.982743 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0633a337-c673-43b4-a012-ac41403a02a1-log-httpd\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.982816 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zh5b\" (UniqueName: \"kubernetes.io/projected/0633a337-c673-43b4-a012-ac41403a02a1-kube-api-access-7zh5b\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: 
\"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.982970 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0633a337-c673-43b4-a012-ac41403a02a1-combined-ca-bundle\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.983258 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0633a337-c673-43b4-a012-ac41403a02a1-public-tls-certs\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:56 crc kubenswrapper[4728]: I0227 10:49:56.983326 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0633a337-c673-43b4-a012-ac41403a02a1-internal-tls-certs\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.085616 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0633a337-c673-43b4-a012-ac41403a02a1-public-tls-certs\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.085895 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0633a337-c673-43b4-a012-ac41403a02a1-internal-tls-certs\") pod 
\"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.086005 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0633a337-c673-43b4-a012-ac41403a02a1-etc-swift\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.086043 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0633a337-c673-43b4-a012-ac41403a02a1-config-data\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.086065 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0633a337-c673-43b4-a012-ac41403a02a1-run-httpd\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.086086 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0633a337-c673-43b4-a012-ac41403a02a1-log-httpd\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.086104 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zh5b\" (UniqueName: \"kubernetes.io/projected/0633a337-c673-43b4-a012-ac41403a02a1-kube-api-access-7zh5b\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: 
\"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.086158 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0633a337-c673-43b4-a012-ac41403a02a1-combined-ca-bundle\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.087294 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0633a337-c673-43b4-a012-ac41403a02a1-run-httpd\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.087461 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0633a337-c673-43b4-a012-ac41403a02a1-log-httpd\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.093209 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0633a337-c673-43b4-a012-ac41403a02a1-public-tls-certs\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.093892 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0633a337-c673-43b4-a012-ac41403a02a1-combined-ca-bundle\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 
10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.094086 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0633a337-c673-43b4-a012-ac41403a02a1-internal-tls-certs\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.095667 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0633a337-c673-43b4-a012-ac41403a02a1-etc-swift\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.100111 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0633a337-c673-43b4-a012-ac41403a02a1-config-data\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.104786 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zh5b\" (UniqueName: \"kubernetes.io/projected/0633a337-c673-43b4-a012-ac41403a02a1-kube-api-access-7zh5b\") pod \"swift-proxy-6bfbb66dbc-8ddj8\" (UID: \"0633a337-c673-43b4-a012-ac41403a02a1\") " pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.137263 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.511383 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.511659 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerName="ceilometer-central-agent" containerID="cri-o://8ebba70ccc18b948278ad3d65f5711ae8ff260a5edfefb06ea4ce731f7cb09be" gracePeriod=30 Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.512378 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerName="proxy-httpd" containerID="cri-o://07c17f4c98c066e72adfb76f94e23dfe160bfe260b02fcc24388b969d4eb97f1" gracePeriod=30 Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.512431 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerName="sg-core" containerID="cri-o://e4be0bdaf86ef0679db41a5f996a4ee82922c465e2d6ed87f02a92c254bce567" gracePeriod=30 Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.512475 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerName="ceilometer-notification-agent" containerID="cri-o://7a1ca7c2ada1415b20dbc22a8e914f5e2b2b33630a8f7c0bfef955d7a48d1a50" gracePeriod=30 Feb 27 10:49:57 crc kubenswrapper[4728]: I0227 10:49:57.530625 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.215:3000/\": EOF" Feb 27 10:49:58 crc kubenswrapper[4728]: I0227 10:49:58.056310 4728 generic.go:334] 
"Generic (PLEG): container finished" podID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerID="07c17f4c98c066e72adfb76f94e23dfe160bfe260b02fcc24388b969d4eb97f1" exitCode=0 Feb 27 10:49:58 crc kubenswrapper[4728]: I0227 10:49:58.056593 4728 generic.go:334] "Generic (PLEG): container finished" podID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerID="e4be0bdaf86ef0679db41a5f996a4ee82922c465e2d6ed87f02a92c254bce567" exitCode=2 Feb 27 10:49:58 crc kubenswrapper[4728]: I0227 10:49:58.056604 4728 generic.go:334] "Generic (PLEG): container finished" podID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerID="8ebba70ccc18b948278ad3d65f5711ae8ff260a5edfefb06ea4ce731f7cb09be" exitCode=0 Feb 27 10:49:58 crc kubenswrapper[4728]: I0227 10:49:58.056395 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcc4fffc-ff97-47b0-885c-f8d81d189bcf","Type":"ContainerDied","Data":"07c17f4c98c066e72adfb76f94e23dfe160bfe260b02fcc24388b969d4eb97f1"} Feb 27 10:49:58 crc kubenswrapper[4728]: I0227 10:49:58.056640 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcc4fffc-ff97-47b0-885c-f8d81d189bcf","Type":"ContainerDied","Data":"e4be0bdaf86ef0679db41a5f996a4ee82922c465e2d6ed87f02a92c254bce567"} Feb 27 10:49:58 crc kubenswrapper[4728]: I0227 10:49:58.056654 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcc4fffc-ff97-47b0-885c-f8d81d189bcf","Type":"ContainerDied","Data":"8ebba70ccc18b948278ad3d65f5711ae8ff260a5edfefb06ea4ce731f7cb09be"} Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.005535 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dmpcg"] Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.018119 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dmpcg" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.026677 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dmpcg"] Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.041775 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d5f0465-1665-4c7f-b8f5-e345efbe3872-operator-scripts\") pod \"nova-api-db-create-dmpcg\" (UID: \"9d5f0465-1665-4c7f-b8f5-e345efbe3872\") " pod="openstack/nova-api-db-create-dmpcg" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.041985 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qqwx\" (UniqueName: \"kubernetes.io/projected/9d5f0465-1665-4c7f-b8f5-e345efbe3872-kube-api-access-8qqwx\") pod \"nova-api-db-create-dmpcg\" (UID: \"9d5f0465-1665-4c7f-b8f5-e345efbe3872\") " pod="openstack/nova-api-db-create-dmpcg" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.117237 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-b66a-account-create-update-wtmz5"] Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.118710 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b66a-account-create-update-wtmz5" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.123211 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.143938 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d5f0465-1665-4c7f-b8f5-e345efbe3872-operator-scripts\") pod \"nova-api-db-create-dmpcg\" (UID: \"9d5f0465-1665-4c7f-b8f5-e345efbe3872\") " pod="openstack/nova-api-db-create-dmpcg" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.143986 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af86f825-d16c-4856-a823-01f06824a97f-operator-scripts\") pod \"nova-api-b66a-account-create-update-wtmz5\" (UID: \"af86f825-d16c-4856-a823-01f06824a97f\") " pod="openstack/nova-api-b66a-account-create-update-wtmz5" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.144138 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qqwx\" (UniqueName: \"kubernetes.io/projected/9d5f0465-1665-4c7f-b8f5-e345efbe3872-kube-api-access-8qqwx\") pod \"nova-api-db-create-dmpcg\" (UID: \"9d5f0465-1665-4c7f-b8f5-e345efbe3872\") " pod="openstack/nova-api-db-create-dmpcg" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.144162 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mctgl\" (UniqueName: \"kubernetes.io/projected/af86f825-d16c-4856-a823-01f06824a97f-kube-api-access-mctgl\") pod \"nova-api-b66a-account-create-update-wtmz5\" (UID: \"af86f825-d16c-4856-a823-01f06824a97f\") " pod="openstack/nova-api-b66a-account-create-update-wtmz5" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.144949 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d5f0465-1665-4c7f-b8f5-e345efbe3872-operator-scripts\") pod \"nova-api-db-create-dmpcg\" (UID: \"9d5f0465-1665-4c7f-b8f5-e345efbe3872\") " pod="openstack/nova-api-db-create-dmpcg" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.144988 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b66a-account-create-update-wtmz5"] Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.192154 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qqwx\" (UniqueName: \"kubernetes.io/projected/9d5f0465-1665-4c7f-b8f5-e345efbe3872-kube-api-access-8qqwx\") pod \"nova-api-db-create-dmpcg\" (UID: \"9d5f0465-1665-4c7f-b8f5-e345efbe3872\") " pod="openstack/nova-api-db-create-dmpcg" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.203188 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-cnj5s"] Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.204787 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-cnj5s" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.225640 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cnj5s"] Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.252009 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mctgl\" (UniqueName: \"kubernetes.io/projected/af86f825-d16c-4856-a823-01f06824a97f-kube-api-access-mctgl\") pod \"nova-api-b66a-account-create-update-wtmz5\" (UID: \"af86f825-d16c-4856-a823-01f06824a97f\") " pod="openstack/nova-api-b66a-account-create-update-wtmz5" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.252215 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af86f825-d16c-4856-a823-01f06824a97f-operator-scripts\") pod \"nova-api-b66a-account-create-update-wtmz5\" (UID: \"af86f825-d16c-4856-a823-01f06824a97f\") " pod="openstack/nova-api-b66a-account-create-update-wtmz5" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.252359 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqptr\" (UniqueName: \"kubernetes.io/projected/c297845b-bf34-4c1d-88f0-3f9b28d09e54-kube-api-access-mqptr\") pod \"nova-cell0-db-create-cnj5s\" (UID: \"c297845b-bf34-4c1d-88f0-3f9b28d09e54\") " pod="openstack/nova-cell0-db-create-cnj5s" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.252419 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c297845b-bf34-4c1d-88f0-3f9b28d09e54-operator-scripts\") pod \"nova-cell0-db-create-cnj5s\" (UID: \"c297845b-bf34-4c1d-88f0-3f9b28d09e54\") " pod="openstack/nova-cell0-db-create-cnj5s" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.253352 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af86f825-d16c-4856-a823-01f06824a97f-operator-scripts\") pod \"nova-api-b66a-account-create-update-wtmz5\" (UID: \"af86f825-d16c-4856-a823-01f06824a97f\") " pod="openstack/nova-api-b66a-account-create-update-wtmz5" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.299543 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mctgl\" (UniqueName: \"kubernetes.io/projected/af86f825-d16c-4856-a823-01f06824a97f-kube-api-access-mctgl\") pod \"nova-api-b66a-account-create-update-wtmz5\" (UID: \"af86f825-d16c-4856-a823-01f06824a97f\") " pod="openstack/nova-api-b66a-account-create-update-wtmz5" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.329758 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-m9lhq"] Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.331354 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-m9lhq" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.354711 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-m9lhq"] Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.358986 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/390e39cb-69e2-4f3b-94a7-deef2d13c027-operator-scripts\") pod \"nova-cell1-db-create-m9lhq\" (UID: \"390e39cb-69e2-4f3b-94a7-deef2d13c027\") " pod="openstack/nova-cell1-db-create-m9lhq" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.359150 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdjwt\" (UniqueName: \"kubernetes.io/projected/390e39cb-69e2-4f3b-94a7-deef2d13c027-kube-api-access-cdjwt\") pod \"nova-cell1-db-create-m9lhq\" (UID: \"390e39cb-69e2-4f3b-94a7-deef2d13c027\") " pod="openstack/nova-cell1-db-create-m9lhq" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.359303 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqptr\" (UniqueName: \"kubernetes.io/projected/c297845b-bf34-4c1d-88f0-3f9b28d09e54-kube-api-access-mqptr\") pod \"nova-cell0-db-create-cnj5s\" (UID: \"c297845b-bf34-4c1d-88f0-3f9b28d09e54\") " pod="openstack/nova-cell0-db-create-cnj5s" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.359431 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c297845b-bf34-4c1d-88f0-3f9b28d09e54-operator-scripts\") pod \"nova-cell0-db-create-cnj5s\" (UID: \"c297845b-bf34-4c1d-88f0-3f9b28d09e54\") " pod="openstack/nova-cell0-db-create-cnj5s" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.360265 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c297845b-bf34-4c1d-88f0-3f9b28d09e54-operator-scripts\") pod \"nova-cell0-db-create-cnj5s\" (UID: \"c297845b-bf34-4c1d-88f0-3f9b28d09e54\") " pod="openstack/nova-cell0-db-create-cnj5s" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.362033 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dmpcg" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.380347 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqptr\" (UniqueName: \"kubernetes.io/projected/c297845b-bf34-4c1d-88f0-3f9b28d09e54-kube-api-access-mqptr\") pod \"nova-cell0-db-create-cnj5s\" (UID: \"c297845b-bf34-4c1d-88f0-3f9b28d09e54\") " pod="openstack/nova-cell0-db-create-cnj5s" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.382207 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7371-account-create-update-6lhlf"] Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.384251 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7371-account-create-update-6lhlf" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.388893 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.393344 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7371-account-create-update-6lhlf"] Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.442010 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b66a-account-create-update-wtmz5" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.460898 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/051376ee-f481-42d6-a69c-637247569a1d-operator-scripts\") pod \"nova-cell0-7371-account-create-update-6lhlf\" (UID: \"051376ee-f481-42d6-a69c-637247569a1d\") " pod="openstack/nova-cell0-7371-account-create-update-6lhlf" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.461266 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnl6x\" (UniqueName: \"kubernetes.io/projected/051376ee-f481-42d6-a69c-637247569a1d-kube-api-access-xnl6x\") pod \"nova-cell0-7371-account-create-update-6lhlf\" (UID: \"051376ee-f481-42d6-a69c-637247569a1d\") " pod="openstack/nova-cell0-7371-account-create-update-6lhlf" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.462210 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/390e39cb-69e2-4f3b-94a7-deef2d13c027-operator-scripts\") pod \"nova-cell1-db-create-m9lhq\" (UID: \"390e39cb-69e2-4f3b-94a7-deef2d13c027\") " pod="openstack/nova-cell1-db-create-m9lhq" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.462379 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdjwt\" (UniqueName: \"kubernetes.io/projected/390e39cb-69e2-4f3b-94a7-deef2d13c027-kube-api-access-cdjwt\") pod \"nova-cell1-db-create-m9lhq\" (UID: \"390e39cb-69e2-4f3b-94a7-deef2d13c027\") " pod="openstack/nova-cell1-db-create-m9lhq" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.463774 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/390e39cb-69e2-4f3b-94a7-deef2d13c027-operator-scripts\") pod \"nova-cell1-db-create-m9lhq\" (UID: \"390e39cb-69e2-4f3b-94a7-deef2d13c027\") " pod="openstack/nova-cell1-db-create-m9lhq" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.487224 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdjwt\" (UniqueName: \"kubernetes.io/projected/390e39cb-69e2-4f3b-94a7-deef2d13c027-kube-api-access-cdjwt\") pod \"nova-cell1-db-create-m9lhq\" (UID: \"390e39cb-69e2-4f3b-94a7-deef2d13c027\") " pod="openstack/nova-cell1-db-create-m9lhq" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.511458 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3833-account-create-update-sllck"] Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.514387 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3833-account-create-update-sllck" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.521792 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.531073 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3833-account-create-update-sllck"] Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.563762 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-cnj5s" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.564484 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnl6x\" (UniqueName: \"kubernetes.io/projected/051376ee-f481-42d6-a69c-637247569a1d-kube-api-access-xnl6x\") pod \"nova-cell0-7371-account-create-update-6lhlf\" (UID: \"051376ee-f481-42d6-a69c-637247569a1d\") " pod="openstack/nova-cell0-7371-account-create-update-6lhlf" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.564612 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3f1a225-d366-4d33-bd31-53ef143b546b-operator-scripts\") pod \"nova-cell1-3833-account-create-update-sllck\" (UID: \"b3f1a225-d366-4d33-bd31-53ef143b546b\") " pod="openstack/nova-cell1-3833-account-create-update-sllck" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.564667 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njf2q\" (UniqueName: \"kubernetes.io/projected/b3f1a225-d366-4d33-bd31-53ef143b546b-kube-api-access-njf2q\") pod \"nova-cell1-3833-account-create-update-sllck\" (UID: \"b3f1a225-d366-4d33-bd31-53ef143b546b\") " pod="openstack/nova-cell1-3833-account-create-update-sllck" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.564688 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/051376ee-f481-42d6-a69c-637247569a1d-operator-scripts\") pod \"nova-cell0-7371-account-create-update-6lhlf\" (UID: \"051376ee-f481-42d6-a69c-637247569a1d\") " pod="openstack/nova-cell0-7371-account-create-update-6lhlf" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.565357 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/051376ee-f481-42d6-a69c-637247569a1d-operator-scripts\") pod \"nova-cell0-7371-account-create-update-6lhlf\" (UID: \"051376ee-f481-42d6-a69c-637247569a1d\") " pod="openstack/nova-cell0-7371-account-create-update-6lhlf" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.603339 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnl6x\" (UniqueName: \"kubernetes.io/projected/051376ee-f481-42d6-a69c-637247569a1d-kube-api-access-xnl6x\") pod \"nova-cell0-7371-account-create-update-6lhlf\" (UID: \"051376ee-f481-42d6-a69c-637247569a1d\") " pod="openstack/nova-cell0-7371-account-create-update-6lhlf" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.666383 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3f1a225-d366-4d33-bd31-53ef143b546b-operator-scripts\") pod \"nova-cell1-3833-account-create-update-sllck\" (UID: \"b3f1a225-d366-4d33-bd31-53ef143b546b\") " pod="openstack/nova-cell1-3833-account-create-update-sllck" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.666477 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njf2q\" (UniqueName: \"kubernetes.io/projected/b3f1a225-d366-4d33-bd31-53ef143b546b-kube-api-access-njf2q\") pod \"nova-cell1-3833-account-create-update-sllck\" (UID: \"b3f1a225-d366-4d33-bd31-53ef143b546b\") " pod="openstack/nova-cell1-3833-account-create-update-sllck" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.667026 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3f1a225-d366-4d33-bd31-53ef143b546b-operator-scripts\") pod \"nova-cell1-3833-account-create-update-sllck\" (UID: \"b3f1a225-d366-4d33-bd31-53ef143b546b\") " pod="openstack/nova-cell1-3833-account-create-update-sllck" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 
10:49:59.679592 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m9lhq" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.681641 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njf2q\" (UniqueName: \"kubernetes.io/projected/b3f1a225-d366-4d33-bd31-53ef143b546b-kube-api-access-njf2q\") pod \"nova-cell1-3833-account-create-update-sllck\" (UID: \"b3f1a225-d366-4d33-bd31-53ef143b546b\") " pod="openstack/nova-cell1-3833-account-create-update-sllck" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.754011 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7371-account-create-update-6lhlf" Feb 27 10:49:59 crc kubenswrapper[4728]: I0227 10:49:59.851834 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3833-account-create-update-sllck" Feb 27 10:50:00 crc kubenswrapper[4728]: I0227 10:50:00.201682 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536490-5x76j"] Feb 27 10:50:00 crc kubenswrapper[4728]: I0227 10:50:00.209164 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536490-5x76j" Feb 27 10:50:00 crc kubenswrapper[4728]: I0227 10:50:00.214371 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 10:50:00 crc kubenswrapper[4728]: I0227 10:50:00.221500 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:50:00 crc kubenswrapper[4728]: I0227 10:50:00.227600 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:50:00 crc kubenswrapper[4728]: I0227 10:50:00.262025 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536490-5x76j"] Feb 27 10:50:00 crc kubenswrapper[4728]: I0227 10:50:00.318818 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9px8h\" (UniqueName: \"kubernetes.io/projected/33b33e17-516b-44dc-b35c-267b155abfa3-kube-api-access-9px8h\") pod \"auto-csr-approver-29536490-5x76j\" (UID: \"33b33e17-516b-44dc-b35c-267b155abfa3\") " pod="openshift-infra/auto-csr-approver-29536490-5x76j" Feb 27 10:50:00 crc kubenswrapper[4728]: I0227 10:50:00.421553 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9px8h\" (UniqueName: \"kubernetes.io/projected/33b33e17-516b-44dc-b35c-267b155abfa3-kube-api-access-9px8h\") pod \"auto-csr-approver-29536490-5x76j\" (UID: \"33b33e17-516b-44dc-b35c-267b155abfa3\") " pod="openshift-infra/auto-csr-approver-29536490-5x76j" Feb 27 10:50:00 crc kubenswrapper[4728]: I0227 10:50:00.452800 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9px8h\" (UniqueName: \"kubernetes.io/projected/33b33e17-516b-44dc-b35c-267b155abfa3-kube-api-access-9px8h\") pod \"auto-csr-approver-29536490-5x76j\" (UID: \"33b33e17-516b-44dc-b35c-267b155abfa3\") " 
pod="openshift-infra/auto-csr-approver-29536490-5x76j" Feb 27 10:50:00 crc kubenswrapper[4728]: I0227 10:50:00.568271 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536490-5x76j" Feb 27 10:50:01 crc kubenswrapper[4728]: I0227 10:50:01.103339 4728 generic.go:334] "Generic (PLEG): container finished" podID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerID="7a1ca7c2ada1415b20dbc22a8e914f5e2b2b33630a8f7c0bfef955d7a48d1a50" exitCode=0 Feb 27 10:50:01 crc kubenswrapper[4728]: I0227 10:50:01.103384 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcc4fffc-ff97-47b0-885c-f8d81d189bcf","Type":"ContainerDied","Data":"7a1ca7c2ada1415b20dbc22a8e914f5e2b2b33630a8f7c0bfef955d7a48d1a50"} Feb 27 10:50:01 crc kubenswrapper[4728]: I0227 10:50:01.874571 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 27 10:50:02 crc kubenswrapper[4728]: I0227 10:50:02.887407 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7777f965d-45648"] Feb 27 10:50:02 crc kubenswrapper[4728]: I0227 10:50:02.890198 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7777f965d-45648" Feb 27 10:50:02 crc kubenswrapper[4728]: I0227 10:50:02.913611 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5d7b9476fd-gdljj"] Feb 27 10:50:02 crc kubenswrapper[4728]: I0227 10:50:02.915669 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:02 crc kubenswrapper[4728]: I0227 10:50:02.948386 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7d59b769b4-g5cgn"] Feb 27 10:50:02 crc kubenswrapper[4728]: I0227 10:50:02.949911 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:02 crc kubenswrapper[4728]: I0227 10:50:02.969086 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7777f965d-45648"] Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:02.998752 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plvmw\" (UniqueName: \"kubernetes.io/projected/cdcc3d16-3cf1-48e9-accc-c65d869be697-kube-api-access-plvmw\") pod \"heat-api-7d59b769b4-g5cgn\" (UID: \"cdcc3d16-3cf1-48e9-accc-c65d869be697\") " pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:02.998805 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-combined-ca-bundle\") pod \"heat-engine-7777f965d-45648\" (UID: \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\") " pod="openstack/heat-engine-7777f965d-45648" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:02.998865 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-config-data\") pod \"heat-engine-7777f965d-45648\" (UID: \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\") " pod="openstack/heat-engine-7777f965d-45648" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:02.998886 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdcc3d16-3cf1-48e9-accc-c65d869be697-config-data\") pod \"heat-api-7d59b769b4-g5cgn\" (UID: \"cdcc3d16-3cf1-48e9-accc-c65d869be697\") " pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:02.998907 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdcc3d16-3cf1-48e9-accc-c65d869be697-combined-ca-bundle\") pod \"heat-api-7d59b769b4-g5cgn\" (UID: \"cdcc3d16-3cf1-48e9-accc-c65d869be697\") " pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:02.998951 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca804f78-493d-459d-9061-ee9fe01d8732-config-data-custom\") pod \"heat-cfnapi-5d7b9476fd-gdljj\" (UID: \"ca804f78-493d-459d-9061-ee9fe01d8732\") " pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:02.998985 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-config-data-custom\") pod \"heat-engine-7777f965d-45648\" (UID: \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\") " pod="openstack/heat-engine-7777f965d-45648" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:02.999027 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca804f78-493d-459d-9061-ee9fe01d8732-config-data\") pod \"heat-cfnapi-5d7b9476fd-gdljj\" (UID: \"ca804f78-493d-459d-9061-ee9fe01d8732\") " pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:02.999053 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdcc3d16-3cf1-48e9-accc-c65d869be697-config-data-custom\") pod \"heat-api-7d59b769b4-g5cgn\" (UID: \"cdcc3d16-3cf1-48e9-accc-c65d869be697\") " pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:02.999086 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca804f78-493d-459d-9061-ee9fe01d8732-combined-ca-bundle\") pod \"heat-cfnapi-5d7b9476fd-gdljj\" (UID: \"ca804f78-493d-459d-9061-ee9fe01d8732\") " pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:02.999163 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvxjn\" (UniqueName: \"kubernetes.io/projected/ca804f78-493d-459d-9061-ee9fe01d8732-kube-api-access-zvxjn\") pod \"heat-cfnapi-5d7b9476fd-gdljj\" (UID: \"ca804f78-493d-459d-9061-ee9fe01d8732\") " pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:02.999215 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsbdf\" (UniqueName: \"kubernetes.io/projected/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-kube-api-access-zsbdf\") pod \"heat-engine-7777f965d-45648\" (UID: \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\") " pod="openstack/heat-engine-7777f965d-45648" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.024508 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7d59b769b4-g5cgn"] Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.042003 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5d7b9476fd-gdljj"] Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.110232 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsbdf\" (UniqueName: \"kubernetes.io/projected/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-kube-api-access-zsbdf\") pod \"heat-engine-7777f965d-45648\" (UID: \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\") " pod="openstack/heat-engine-7777f965d-45648" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.110713 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-plvmw\" (UniqueName: \"kubernetes.io/projected/cdcc3d16-3cf1-48e9-accc-c65d869be697-kube-api-access-plvmw\") pod \"heat-api-7d59b769b4-g5cgn\" (UID: \"cdcc3d16-3cf1-48e9-accc-c65d869be697\") " pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.110866 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-combined-ca-bundle\") pod \"heat-engine-7777f965d-45648\" (UID: \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\") " pod="openstack/heat-engine-7777f965d-45648" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.111711 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-config-data\") pod \"heat-engine-7777f965d-45648\" (UID: \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\") " pod="openstack/heat-engine-7777f965d-45648" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.111855 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdcc3d16-3cf1-48e9-accc-c65d869be697-config-data\") pod \"heat-api-7d59b769b4-g5cgn\" (UID: \"cdcc3d16-3cf1-48e9-accc-c65d869be697\") " pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.112161 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdcc3d16-3cf1-48e9-accc-c65d869be697-combined-ca-bundle\") pod \"heat-api-7d59b769b4-g5cgn\" (UID: \"cdcc3d16-3cf1-48e9-accc-c65d869be697\") " pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.112395 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca804f78-493d-459d-9061-ee9fe01d8732-config-data-custom\") pod \"heat-cfnapi-5d7b9476fd-gdljj\" (UID: \"ca804f78-493d-459d-9061-ee9fe01d8732\") " pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.112619 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-config-data-custom\") pod \"heat-engine-7777f965d-45648\" (UID: \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\") " pod="openstack/heat-engine-7777f965d-45648" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.112844 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca804f78-493d-459d-9061-ee9fe01d8732-config-data\") pod \"heat-cfnapi-5d7b9476fd-gdljj\" (UID: \"ca804f78-493d-459d-9061-ee9fe01d8732\") " pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.112982 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdcc3d16-3cf1-48e9-accc-c65d869be697-config-data-custom\") pod \"heat-api-7d59b769b4-g5cgn\" (UID: \"cdcc3d16-3cf1-48e9-accc-c65d869be697\") " pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.113131 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca804f78-493d-459d-9061-ee9fe01d8732-combined-ca-bundle\") pod \"heat-cfnapi-5d7b9476fd-gdljj\" (UID: \"ca804f78-493d-459d-9061-ee9fe01d8732\") " pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.113318 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvxjn\" (UniqueName: 
\"kubernetes.io/projected/ca804f78-493d-459d-9061-ee9fe01d8732-kube-api-access-zvxjn\") pod \"heat-cfnapi-5d7b9476fd-gdljj\" (UID: \"ca804f78-493d-459d-9061-ee9fe01d8732\") " pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.123525 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdcc3d16-3cf1-48e9-accc-c65d869be697-combined-ca-bundle\") pod \"heat-api-7d59b769b4-g5cgn\" (UID: \"cdcc3d16-3cf1-48e9-accc-c65d869be697\") " pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.125545 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-config-data\") pod \"heat-engine-7777f965d-45648\" (UID: \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\") " pod="openstack/heat-engine-7777f965d-45648" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.127324 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdcc3d16-3cf1-48e9-accc-c65d869be697-config-data-custom\") pod \"heat-api-7d59b769b4-g5cgn\" (UID: \"cdcc3d16-3cf1-48e9-accc-c65d869be697\") " pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.128733 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca804f78-493d-459d-9061-ee9fe01d8732-config-data-custom\") pod \"heat-cfnapi-5d7b9476fd-gdljj\" (UID: \"ca804f78-493d-459d-9061-ee9fe01d8732\") " pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.141004 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-config-data-custom\") pod 
\"heat-engine-7777f965d-45648\" (UID: \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\") " pod="openstack/heat-engine-7777f965d-45648" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.142466 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca804f78-493d-459d-9061-ee9fe01d8732-config-data\") pod \"heat-cfnapi-5d7b9476fd-gdljj\" (UID: \"ca804f78-493d-459d-9061-ee9fe01d8732\") " pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.144752 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca804f78-493d-459d-9061-ee9fe01d8732-combined-ca-bundle\") pod \"heat-cfnapi-5d7b9476fd-gdljj\" (UID: \"ca804f78-493d-459d-9061-ee9fe01d8732\") " pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.145665 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdcc3d16-3cf1-48e9-accc-c65d869be697-config-data\") pod \"heat-api-7d59b769b4-g5cgn\" (UID: \"cdcc3d16-3cf1-48e9-accc-c65d869be697\") " pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.146629 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsbdf\" (UniqueName: \"kubernetes.io/projected/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-kube-api-access-zsbdf\") pod \"heat-engine-7777f965d-45648\" (UID: \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\") " pod="openstack/heat-engine-7777f965d-45648" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.148035 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-combined-ca-bundle\") pod \"heat-engine-7777f965d-45648\" (UID: \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\") " 
pod="openstack/heat-engine-7777f965d-45648" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.156183 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvxjn\" (UniqueName: \"kubernetes.io/projected/ca804f78-493d-459d-9061-ee9fe01d8732-kube-api-access-zvxjn\") pod \"heat-cfnapi-5d7b9476fd-gdljj\" (UID: \"ca804f78-493d-459d-9061-ee9fe01d8732\") " pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.168662 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plvmw\" (UniqueName: \"kubernetes.io/projected/cdcc3d16-3cf1-48e9-accc-c65d869be697-kube-api-access-plvmw\") pod \"heat-api-7d59b769b4-g5cgn\" (UID: \"cdcc3d16-3cf1-48e9-accc-c65d869be697\") " pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.216375 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7777f965d-45648" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.236861 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:03 crc kubenswrapper[4728]: I0227 10:50:03.266299 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:04 crc kubenswrapper[4728]: I0227 10:50:04.432525 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7371-account-create-update-6lhlf"] Feb 27 10:50:04 crc kubenswrapper[4728]: W0227 10:50:04.556497 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod051376ee_f481_42d6_a69c_637247569a1d.slice/crio-be1ac0778f36f8e355d4923673df7c1665acabeccba1888d4d4f0663c86ec29a WatchSource:0}: Error finding container be1ac0778f36f8e355d4923673df7c1665acabeccba1888d4d4f0663c86ec29a: Status 404 returned error can't find the container with id be1ac0778f36f8e355d4923673df7c1665acabeccba1888d4d4f0663c86ec29a Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:04.878403 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.000683 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-scripts\") pod \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.000758 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-run-httpd\") pod \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.000794 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-log-httpd\") pod \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " Feb 27 
10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.000846 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-sg-core-conf-yaml\") pod \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.000984 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-combined-ca-bundle\") pod \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.001041 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-config-data\") pod \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.001138 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwxrq\" (UniqueName: \"kubernetes.io/projected/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-kube-api-access-bwxrq\") pod \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\" (UID: \"fcc4fffc-ff97-47b0-885c-f8d81d189bcf\") " Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.001212 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fcc4fffc-ff97-47b0-885c-f8d81d189bcf" (UID: "fcc4fffc-ff97-47b0-885c-f8d81d189bcf"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.001695 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fcc4fffc-ff97-47b0-885c-f8d81d189bcf" (UID: "fcc4fffc-ff97-47b0-885c-f8d81d189bcf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.002271 4728 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.002291 4728 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.038430 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-kube-api-access-bwxrq" (OuterVolumeSpecName: "kube-api-access-bwxrq") pod "fcc4fffc-ff97-47b0-885c-f8d81d189bcf" (UID: "fcc4fffc-ff97-47b0-885c-f8d81d189bcf"). InnerVolumeSpecName "kube-api-access-bwxrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.048256 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-scripts" (OuterVolumeSpecName: "scripts") pod "fcc4fffc-ff97-47b0-885c-f8d81d189bcf" (UID: "fcc4fffc-ff97-47b0-885c-f8d81d189bcf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.094730 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fcc4fffc-ff97-47b0-885c-f8d81d189bcf" (UID: "fcc4fffc-ff97-47b0-885c-f8d81d189bcf"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.104984 4728 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.105006 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwxrq\" (UniqueName: \"kubernetes.io/projected/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-kube-api-access-bwxrq\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.105017 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.217938 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-config-data" (OuterVolumeSpecName: "config-data") pod "fcc4fffc-ff97-47b0-885c-f8d81d189bcf" (UID: "fcc4fffc-ff97-47b0-885c-f8d81d189bcf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.238628 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b6a05f62-98d5-4df6-9cbb-d4bbb1778642","Type":"ContainerStarted","Data":"00b87c53199074787d0a01635aa38d440246eb906649dde66f6381dc95e26cde"} Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.242152 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cnj5s"] Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.258580 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.868211907 podStartE2EDuration="15.25856191s" podCreationTimestamp="2026-02-27 10:49:50 +0000 UTC" firstStartedPulling="2026-02-27 10:49:51.48522929 +0000 UTC m=+1411.447595396" lastFinishedPulling="2026-02-27 10:50:03.875579303 +0000 UTC m=+1423.837945399" observedRunningTime="2026-02-27 10:50:05.252403783 +0000 UTC m=+1425.214769889" watchObservedRunningTime="2026-02-27 10:50:05.25856191 +0000 UTC m=+1425.220928016" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.274916 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.275909 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcc4fffc-ff97-47b0-885c-f8d81d189bcf","Type":"ContainerDied","Data":"c1eeeab23961a4293f63ea7140efd4514d22c73e9c262153628c35c63175bdd8"} Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.275946 4728 scope.go:117] "RemoveContainer" containerID="07c17f4c98c066e72adfb76f94e23dfe160bfe260b02fcc24388b969d4eb97f1" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.295853 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7371-account-create-update-6lhlf" event={"ID":"051376ee-f481-42d6-a69c-637247569a1d","Type":"ContainerStarted","Data":"edadc918fd6b5518e1194a93b819e609d220ae4326c3699c90bca9a9be1cfe95"} Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.295919 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7371-account-create-update-6lhlf" event={"ID":"051376ee-f481-42d6-a69c-637247569a1d","Type":"ContainerStarted","Data":"be1ac0778f36f8e355d4923673df7c1665acabeccba1888d4d4f0663c86ec29a"} Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.310975 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.316169 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-7371-account-create-update-6lhlf" podStartSLOduration=6.316153413 podStartE2EDuration="6.316153413s" podCreationTimestamp="2026-02-27 10:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:50:05.314302752 +0000 UTC m=+1425.276668858" watchObservedRunningTime="2026-02-27 
10:50:05.316153413 +0000 UTC m=+1425.278519519" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.325106 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcc4fffc-ff97-47b0-885c-f8d81d189bcf" (UID: "fcc4fffc-ff97-47b0-885c-f8d81d189bcf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.414455 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc4fffc-ff97-47b0-885c-f8d81d189bcf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.439847 4728 scope.go:117] "RemoveContainer" containerID="e4be0bdaf86ef0679db41a5f996a4ee82922c465e2d6ed87f02a92c254bce567" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.560021 4728 scope.go:117] "RemoveContainer" containerID="7a1ca7c2ada1415b20dbc22a8e914f5e2b2b33630a8f7c0bfef955d7a48d1a50" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.594712 4728 scope.go:117] "RemoveContainer" containerID="8ebba70ccc18b948278ad3d65f5711ae8ff260a5edfefb06ea4ce731f7cb09be" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.641889 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.674617 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.691276 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:50:05 crc kubenswrapper[4728]: E0227 10:50:05.692075 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerName="ceilometer-central-agent" Feb 27 10:50:05 crc 
kubenswrapper[4728]: I0227 10:50:05.692093 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerName="ceilometer-central-agent" Feb 27 10:50:05 crc kubenswrapper[4728]: E0227 10:50:05.692110 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerName="proxy-httpd" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.692117 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerName="proxy-httpd" Feb 27 10:50:05 crc kubenswrapper[4728]: E0227 10:50:05.692142 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerName="sg-core" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.692148 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerName="sg-core" Feb 27 10:50:05 crc kubenswrapper[4728]: E0227 10:50:05.692171 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerName="ceilometer-notification-agent" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.692177 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerName="ceilometer-notification-agent" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.692399 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerName="sg-core" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.692427 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerName="proxy-httpd" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.692441 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerName="ceilometer-central-agent" Feb 27 10:50:05 
crc kubenswrapper[4728]: I0227 10:50:05.692453 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" containerName="ceilometer-notification-agent" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.695403 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.698062 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.698235 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.704785 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.722072 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38a4c39e-48ad-4e05-a4b8-162fef5472b4-log-httpd\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.722157 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qq4h\" (UniqueName: \"kubernetes.io/projected/38a4c39e-48ad-4e05-a4b8-162fef5472b4-kube-api-access-9qq4h\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.722187 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-config-data\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc 
kubenswrapper[4728]: I0227 10:50:05.722221 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.722401 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-scripts\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.722497 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.722614 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38a4c39e-48ad-4e05-a4b8-162fef5472b4-run-httpd\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.825829 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38a4c39e-48ad-4e05-a4b8-162fef5472b4-run-httpd\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.825903 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/38a4c39e-48ad-4e05-a4b8-162fef5472b4-log-httpd\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.825979 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qq4h\" (UniqueName: \"kubernetes.io/projected/38a4c39e-48ad-4e05-a4b8-162fef5472b4-kube-api-access-9qq4h\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.826006 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-config-data\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.826066 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.826276 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-scripts\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.826343 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: 
I0227 10:50:05.826452 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38a4c39e-48ad-4e05-a4b8-162fef5472b4-run-httpd\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.830576 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38a4c39e-48ad-4e05-a4b8-162fef5472b4-log-httpd\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.833562 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.846233 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.846975 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-scripts\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.849309 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-config-data\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " 
pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.855372 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qq4h\" (UniqueName: \"kubernetes.io/projected/38a4c39e-48ad-4e05-a4b8-162fef5472b4-kube-api-access-9qq4h\") pod \"ceilometer-0\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " pod="openstack/ceilometer-0" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.866572 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5dcd5b5d76-tfs97"] Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.910106 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-54dc5858d7-7975b"] Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.941113 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-76558d5849-k75gx"] Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.942691 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.947483 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.955636 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.959103 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-76558d5849-k75gx"] Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.969762 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-69498cf5f9-8b2rb"] Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.971181 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.973867 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.974022 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 27 10:50:05 crc kubenswrapper[4728]: I0227 10:50:05.989126 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-69498cf5f9-8b2rb"] Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.023024 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.032071 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-config-data\") pod \"heat-cfnapi-69498cf5f9-8b2rb\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.032113 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-public-tls-certs\") pod \"heat-api-76558d5849-k75gx\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.032194 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-865fc\" (UniqueName: \"kubernetes.io/projected/6386555e-93c8-46af-bdc9-ca0db04f8712-kube-api-access-865fc\") pod \"heat-api-76558d5849-k75gx\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 crc 
kubenswrapper[4728]: I0227 10:50:06.032298 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvnv4\" (UniqueName: \"kubernetes.io/projected/f411351c-a796-4df4-9e09-407e93afb4a9-kube-api-access-lvnv4\") pod \"heat-cfnapi-69498cf5f9-8b2rb\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.032342 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-config-data\") pod \"heat-api-76558d5849-k75gx\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.032419 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-internal-tls-certs\") pod \"heat-cfnapi-69498cf5f9-8b2rb\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.032445 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-internal-tls-certs\") pod \"heat-api-76558d5849-k75gx\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.032571 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-combined-ca-bundle\") pod \"heat-api-76558d5849-k75gx\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " 
pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.032635 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-combined-ca-bundle\") pod \"heat-cfnapi-69498cf5f9-8b2rb\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.032665 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-public-tls-certs\") pod \"heat-cfnapi-69498cf5f9-8b2rb\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.032736 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-config-data-custom\") pod \"heat-api-76558d5849-k75gx\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.032758 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-config-data-custom\") pod \"heat-cfnapi-69498cf5f9-8b2rb\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.134982 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvnv4\" (UniqueName: \"kubernetes.io/projected/f411351c-a796-4df4-9e09-407e93afb4a9-kube-api-access-lvnv4\") pod \"heat-cfnapi-69498cf5f9-8b2rb\" (UID: 
\"f411351c-a796-4df4-9e09-407e93afb4a9\") " pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.135831 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-config-data\") pod \"heat-api-76558d5849-k75gx\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.135891 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-internal-tls-certs\") pod \"heat-cfnapi-69498cf5f9-8b2rb\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.135914 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-internal-tls-certs\") pod \"heat-api-76558d5849-k75gx\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.135939 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-combined-ca-bundle\") pod \"heat-api-76558d5849-k75gx\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.135960 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-combined-ca-bundle\") pod \"heat-cfnapi-69498cf5f9-8b2rb\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " 
pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.135974 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-public-tls-certs\") pod \"heat-cfnapi-69498cf5f9-8b2rb\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.136007 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-config-data-custom\") pod \"heat-api-76558d5849-k75gx\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.136021 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-config-data-custom\") pod \"heat-cfnapi-69498cf5f9-8b2rb\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.136050 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-config-data\") pod \"heat-cfnapi-69498cf5f9-8b2rb\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.136067 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-public-tls-certs\") pod \"heat-api-76558d5849-k75gx\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 
crc kubenswrapper[4728]: I0227 10:50:06.136123 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-865fc\" (UniqueName: \"kubernetes.io/projected/6386555e-93c8-46af-bdc9-ca0db04f8712-kube-api-access-865fc\") pod \"heat-api-76558d5849-k75gx\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.153031 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-config-data-custom\") pod \"heat-cfnapi-69498cf5f9-8b2rb\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.153693 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-config-data-custom\") pod \"heat-api-76558d5849-k75gx\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.154077 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-combined-ca-bundle\") pod \"heat-api-76558d5849-k75gx\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.154811 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-config-data\") pod \"heat-cfnapi-69498cf5f9-8b2rb\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.155035 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-public-tls-certs\") pod \"heat-api-76558d5849-k75gx\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.159788 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-config-data\") pod \"heat-api-76558d5849-k75gx\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.160870 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-combined-ca-bundle\") pod \"heat-cfnapi-69498cf5f9-8b2rb\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.161769 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-865fc\" (UniqueName: \"kubernetes.io/projected/6386555e-93c8-46af-bdc9-ca0db04f8712-kube-api-access-865fc\") pod \"heat-api-76558d5849-k75gx\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.165841 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-internal-tls-certs\") pod \"heat-api-76558d5849-k75gx\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.174882 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-internal-tls-certs\") pod \"heat-cfnapi-69498cf5f9-8b2rb\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.202395 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-public-tls-certs\") pod \"heat-cfnapi-69498cf5f9-8b2rb\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.206729 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvnv4\" (UniqueName: \"kubernetes.io/projected/f411351c-a796-4df4-9e09-407e93afb4a9-kube-api-access-lvnv4\") pod \"heat-cfnapi-69498cf5f9-8b2rb\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: W0227 10:50:06.225698 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33b33e17_516b_44dc_b35c_267b155abfa3.slice/crio-cd23d5b193252fc8bc4ead0cdf2eb08a71279c3e2ef59cdaa50c97ec56cdecfb WatchSource:0}: Error finding container cd23d5b193252fc8bc4ead0cdf2eb08a71279c3e2ef59cdaa50c97ec56cdecfb: Status 404 returned error can't find the container with id cd23d5b193252fc8bc4ead0cdf2eb08a71279c3e2ef59cdaa50c97ec56cdecfb Feb 27 10:50:06 crc kubenswrapper[4728]: W0227 10:50:06.233238 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11767f8f_ebed_4306_b5f7_5e79182d0ad1.slice/crio-1be40c6916fcc0e090d964452b1c7f18ec26dbd9e8c82ad7fecf1cedce25a959 WatchSource:0}: Error finding container 1be40c6916fcc0e090d964452b1c7f18ec26dbd9e8c82ad7fecf1cedce25a959: Status 404 returned error can't find the container 
with id 1be40c6916fcc0e090d964452b1c7f18ec26dbd9e8c82ad7fecf1cedce25a959 Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.235242 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5d7b9476fd-gdljj"] Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.270260 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.272079 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dmpcg"] Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.298664 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-6gjtz"] Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.307619 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.312095 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536490-5x76j"] Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.344162 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-m9lhq"] Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.353837 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" event={"ID":"11767f8f-ebed-4306-b5f7-5e79182d0ad1","Type":"ContainerStarted","Data":"1be40c6916fcc0e090d964452b1c7f18ec26dbd9e8c82ad7fecf1cedce25a959"} Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.358274 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" event={"ID":"ca804f78-493d-459d-9061-ee9fe01d8732","Type":"ContainerStarted","Data":"a6f287951a7e8e887032a9cf69cae4c6062482bdacd2b323b25105eb9e36dd45"} Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.359942 4728 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-api-b66a-account-create-update-wtmz5" event={"ID":"af86f825-d16c-4856-a823-01f06824a97f","Type":"ContainerStarted","Data":"4f60c6cd013aad66dd892924196847c95b5ee2cce37c008e71105ed9212c3d3f"} Feb 27 10:50:06 crc kubenswrapper[4728]: W0227 10:50:06.369387 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0633a337_c673_43b4_a012_ac41403a02a1.slice/crio-3f9933e5009ace7f6e637fbdcb84c26a63dcf2750ebce7b54dd6145d39f4d914 WatchSource:0}: Error finding container 3f9933e5009ace7f6e637fbdcb84c26a63dcf2750ebce7b54dd6145d39f4d914: Status 404 returned error can't find the container with id 3f9933e5009ace7f6e637fbdcb84c26a63dcf2750ebce7b54dd6145d39f4d914 Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.371759 4728 generic.go:334] "Generic (PLEG): container finished" podID="c297845b-bf34-4c1d-88f0-3f9b28d09e54" containerID="b543d083385962cb317532c57b2ca5f63d7e509372a519209505ae381fce765b" exitCode=0 Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.371845 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cnj5s" event={"ID":"c297845b-bf34-4c1d-88f0-3f9b28d09e54","Type":"ContainerDied","Data":"b543d083385962cb317532c57b2ca5f63d7e509372a519209505ae381fce765b"} Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.371871 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cnj5s" event={"ID":"c297845b-bf34-4c1d-88f0-3f9b28d09e54","Type":"ContainerStarted","Data":"6dc849902408777f12a07f1cb4a3e1f7a34b44fffec6bec5c3a0004dd3de0b33"} Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.378446 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-54dc5858d7-7975b"] Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.403966 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7777f965d-45648"] Feb 27 10:50:06 crc 
kubenswrapper[4728]: I0227 10:50:06.410408 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dmpcg" event={"ID":"9d5f0465-1665-4c7f-b8f5-e345efbe3872","Type":"ContainerStarted","Data":"b04963d5a843c767e61253a3c6bdc9edeeb3fa4590a4701e3907e6222c920e44"} Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.414216 4728 generic.go:334] "Generic (PLEG): container finished" podID="051376ee-f481-42d6-a69c-637247569a1d" containerID="edadc918fd6b5518e1194a93b819e609d220ae4326c3699c90bca9a9be1cfe95" exitCode=0 Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.414277 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7371-account-create-update-6lhlf" event={"ID":"051376ee-f481-42d6-a69c-637247569a1d","Type":"ContainerDied","Data":"edadc918fd6b5518e1194a93b819e609d220ae4326c3699c90bca9a9be1cfe95"} Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.462254 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7777f965d-45648" event={"ID":"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42","Type":"ContainerStarted","Data":"0f1b7aa56710023f2e4d93d945797d4f549ccd265ecdbc7de982dfab662bc1ff"} Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.463457 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b66a-account-create-update-wtmz5"] Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.468681 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m9lhq" event={"ID":"390e39cb-69e2-4f3b-94a7-deef2d13c027","Type":"ContainerStarted","Data":"1bbbb1b64a60c450fb4dd29417d9844d1ab12d782186d5c616e8c8f9433dc991"} Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.470815 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536490-5x76j" 
event={"ID":"33b33e17-516b-44dc-b35c-267b155abfa3","Type":"ContainerStarted","Data":"cd23d5b193252fc8bc4ead0cdf2eb08a71279c3e2ef59cdaa50c97ec56cdecfb"} Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.471945 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54dc5858d7-7975b" event={"ID":"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f","Type":"ContainerStarted","Data":"79c9d5719c3b09abe1e2e0814c8cf6f530a378248ea3f2eb7f5f77d5fbe445d9"} Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.483763 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3833-account-create-update-sllck"] Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.495709 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6bfbb66dbc-8ddj8"] Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.571100 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-84dc467799-w92cc"] Feb 27 10:50:06 crc kubenswrapper[4728]: W0227 10:50:06.653094 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0db7498_5f3f_4550_932e_64f7d721e902.slice/crio-f8b3f70a6aa6b2b0a2e9d1b715988432d0ce10367376fbe638b52bd957d7b7c7 WatchSource:0}: Error finding container f8b3f70a6aa6b2b0a2e9d1b715988432d0ce10367376fbe638b52bd957d7b7c7: Status 404 returned error can't find the container with id f8b3f70a6aa6b2b0a2e9d1b715988432d0ce10367376fbe638b52bd957d7b7c7 Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.708139 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7d59b769b4-g5cgn"] Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.795011 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc4fffc-ff97-47b0-885c-f8d81d189bcf" path="/var/lib/kubelet/pods/fcc4fffc-ff97-47b0-885c-f8d81d189bcf/volumes" Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.798678 4728 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5dcd5b5d76-tfs97"] Feb 27 10:50:06 crc kubenswrapper[4728]: I0227 10:50:06.798709 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.002030 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-76558d5849-k75gx"] Feb 27 10:50:07 crc kubenswrapper[4728]: W0227 10:50:07.016701 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6386555e_93c8_46af_bdc9_ca0db04f8712.slice/crio-ae4733a7733525ee17960b7759b8897ada40f8b766a7ee8fff1e5c6b51c0a284 WatchSource:0}: Error finding container ae4733a7733525ee17960b7759b8897ada40f8b766a7ee8fff1e5c6b51c0a284: Status 404 returned error can't find the container with id ae4733a7733525ee17960b7759b8897ada40f8b766a7ee8fff1e5c6b51c0a284 Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.168615 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-69498cf5f9-8b2rb"] Feb 27 10:50:07 crc kubenswrapper[4728]: W0227 10:50:07.201901 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf411351c_a796_4df4_9e09_407e93afb4a9.slice/crio-677782ef9a7f0f243bb72121b2c438f9e9dab82dca6213829dad6eb9e97c85be WatchSource:0}: Error finding container 677782ef9a7f0f243bb72121b2c438f9e9dab82dca6213829dad6eb9e97c85be: Status 404 returned error can't find the container with id 677782ef9a7f0f243bb72121b2c438f9e9dab82dca6213829dad6eb9e97c85be Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.505738 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7777f965d-45648" event={"ID":"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42","Type":"ContainerStarted","Data":"e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35"} Feb 27 10:50:07 crc 
kubenswrapper[4728]: I0227 10:50:07.507779 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7777f965d-45648" Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.530306 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7777f965d-45648" podStartSLOduration=5.530284033 podStartE2EDuration="5.530284033s" podCreationTimestamp="2026-02-27 10:50:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:50:07.528005971 +0000 UTC m=+1427.490372087" watchObservedRunningTime="2026-02-27 10:50:07.530284033 +0000 UTC m=+1427.492650139" Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.542742 4728 generic.go:334] "Generic (PLEG): container finished" podID="9d5f0465-1665-4c7f-b8f5-e345efbe3872" containerID="0b5c76c2458773631054dcaca04d179732379c023235d1a76b79c113f201901a" exitCode=0 Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.542860 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dmpcg" event={"ID":"9d5f0465-1665-4c7f-b8f5-e345efbe3872","Type":"ContainerDied","Data":"0b5c76c2458773631054dcaca04d179732379c023235d1a76b79c113f201901a"} Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.551851 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-84dc467799-w92cc" event={"ID":"b0db7498-5f3f-4550-932e-64f7d721e902","Type":"ContainerStarted","Data":"4f0bce30e7995e8fede562a5ec4a20cf6cce0e35118324e85e7d86547b46c89b"} Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.551891 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-84dc467799-w92cc" event={"ID":"b0db7498-5f3f-4550-932e-64f7d721e902","Type":"ContainerStarted","Data":"f8b3f70a6aa6b2b0a2e9d1b715988432d0ce10367376fbe638b52bd957d7b7c7"} Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.552810 4728 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-84dc467799-w92cc" Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.555801 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" event={"ID":"0633a337-c673-43b4-a012-ac41403a02a1","Type":"ContainerStarted","Data":"b7270124ff198cf4428f36f634bc80c4e46526769cf4dbe65407ba14646fe785"} Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.555860 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" event={"ID":"0633a337-c673-43b4-a012-ac41403a02a1","Type":"ContainerStarted","Data":"3f9933e5009ace7f6e637fbdcb84c26a63dcf2750ebce7b54dd6145d39f4d914"} Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.559047 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" event={"ID":"f411351c-a796-4df4-9e09-407e93afb4a9","Type":"ContainerStarted","Data":"677782ef9a7f0f243bb72121b2c438f9e9dab82dca6213829dad6eb9e97c85be"} Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.569021 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38a4c39e-48ad-4e05-a4b8-162fef5472b4","Type":"ContainerStarted","Data":"5b3f2430eb454e1a24c694d9b4c4d06880274995f42f92580dcd3b28ee011044"} Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.575697 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d59b769b4-g5cgn" event={"ID":"cdcc3d16-3cf1-48e9-accc-c65d869be697","Type":"ContainerStarted","Data":"0e5a72f0ee394b7e29c79cd5833daaa909a2f802153ab24fe5c301a413059473"} Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.587405 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-84dc467799-w92cc" podStartSLOduration=11.587383053 podStartE2EDuration="11.587383053s" podCreationTimestamp="2026-02-27 10:49:56 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:50:07.581984556 +0000 UTC m=+1427.544350652" watchObservedRunningTime="2026-02-27 10:50:07.587383053 +0000 UTC m=+1427.549749159" Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.589371 4728 generic.go:334] "Generic (PLEG): container finished" podID="11767f8f-ebed-4306-b5f7-5e79182d0ad1" containerID="82bbcb2b1994af24f0bdba82fd3d51d0b4a6dc8b8e9e02752b18ffd92dee14f2" exitCode=0 Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.589431 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" event={"ID":"11767f8f-ebed-4306-b5f7-5e79182d0ad1","Type":"ContainerDied","Data":"82bbcb2b1994af24f0bdba82fd3d51d0b4a6dc8b8e9e02752b18ffd92dee14f2"} Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.596022 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-76558d5849-k75gx" event={"ID":"6386555e-93c8-46af-bdc9-ca0db04f8712","Type":"ContainerStarted","Data":"ae4733a7733525ee17960b7759b8897ada40f8b766a7ee8fff1e5c6b51c0a284"} Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.601521 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3833-account-create-update-sllck" event={"ID":"b3f1a225-d366-4d33-bd31-53ef143b546b","Type":"ContainerStarted","Data":"af14006cb97dcfb309cae43234688f5a90e9d54c25c2ea51f5ac948c38136fe3"} Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.601562 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3833-account-create-update-sllck" event={"ID":"b3f1a225-d366-4d33-bd31-53ef143b546b","Type":"ContainerStarted","Data":"66b0ec50e0faceebd1dbb351226adbfcbe150bb695888a5ee7c2a2122ecfb07e"} Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.607452 4728 generic.go:334] "Generic (PLEG): container finished" podID="390e39cb-69e2-4f3b-94a7-deef2d13c027" 
containerID="ca665a991a6757017f54e60649d11642ff9c681aa78712ef7af425af22d4ab5d" exitCode=0 Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.607560 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m9lhq" event={"ID":"390e39cb-69e2-4f3b-94a7-deef2d13c027","Type":"ContainerDied","Data":"ca665a991a6757017f54e60649d11642ff9c681aa78712ef7af425af22d4ab5d"} Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.629189 4728 generic.go:334] "Generic (PLEG): container finished" podID="af86f825-d16c-4856-a823-01f06824a97f" containerID="f4015a30582b187ccb0c55f1870d06cb1d9d793f10f1ed0ed31c3e2494d51211" exitCode=0 Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.629289 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b66a-account-create-update-wtmz5" event={"ID":"af86f825-d16c-4856-a823-01f06824a97f","Type":"ContainerDied","Data":"f4015a30582b187ccb0c55f1870d06cb1d9d793f10f1ed0ed31c3e2494d51211"} Feb 27 10:50:07 crc kubenswrapper[4728]: I0227 10:50:07.639057 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5dcd5b5d76-tfs97" event={"ID":"96dc96a4-9c81-4702-8678-1f6824535e01","Type":"ContainerStarted","Data":"22e194aa56a0b907c0d4c6f80b91a1309b2605d95e93987a18c9c46d41526e85"} Feb 27 10:50:08 crc kubenswrapper[4728]: I0227 10:50:08.653798 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" event={"ID":"0633a337-c673-43b4-a012-ac41403a02a1","Type":"ContainerStarted","Data":"4f7e0005c736e3658f1bdaf5487f5ea5fd34a3b54ee34970a048fbf1bb605d16"} Feb 27 10:50:08 crc kubenswrapper[4728]: I0227 10:50:08.654685 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:50:08 crc kubenswrapper[4728]: I0227 10:50:08.654724 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:50:08 crc 
kubenswrapper[4728]: I0227 10:50:08.661158 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38a4c39e-48ad-4e05-a4b8-162fef5472b4","Type":"ContainerStarted","Data":"1147fea5bd875a0f5d65df8cd3e342653fc2b50663a72ccc7f2673ef6dd4bc17"} Feb 27 10:50:08 crc kubenswrapper[4728]: I0227 10:50:08.663819 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" event={"ID":"11767f8f-ebed-4306-b5f7-5e79182d0ad1","Type":"ContainerStarted","Data":"5e17701145928d05b423e0f776dfca096755255ffa760b4215739c6efdf2b783"} Feb 27 10:50:08 crc kubenswrapper[4728]: I0227 10:50:08.663952 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:50:08 crc kubenswrapper[4728]: I0227 10:50:08.665407 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536490-5x76j" event={"ID":"33b33e17-516b-44dc-b35c-267b155abfa3","Type":"ContainerStarted","Data":"44fb28981bed6077cf7613fe63246cf6fffc2f68088a0eb025ac856ba856db60"} Feb 27 10:50:08 crc kubenswrapper[4728]: I0227 10:50:08.667784 4728 generic.go:334] "Generic (PLEG): container finished" podID="b3f1a225-d366-4d33-bd31-53ef143b546b" containerID="af14006cb97dcfb309cae43234688f5a90e9d54c25c2ea51f5ac948c38136fe3" exitCode=0 Feb 27 10:50:08 crc kubenswrapper[4728]: I0227 10:50:08.667936 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3833-account-create-update-sllck" event={"ID":"b3f1a225-d366-4d33-bd31-53ef143b546b","Type":"ContainerDied","Data":"af14006cb97dcfb309cae43234688f5a90e9d54c25c2ea51f5ac948c38136fe3"} Feb 27 10:50:08 crc kubenswrapper[4728]: I0227 10:50:08.681010 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" podStartSLOduration=12.680995038 podStartE2EDuration="12.680995038s" podCreationTimestamp="2026-02-27 10:49:56 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:50:08.672882498 +0000 UTC m=+1428.635248604" watchObservedRunningTime="2026-02-27 10:50:08.680995038 +0000 UTC m=+1428.643361144" Feb 27 10:50:08 crc kubenswrapper[4728]: I0227 10:50:08.722642 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536490-5x76j" podStartSLOduration=7.570964468 podStartE2EDuration="8.722625338s" podCreationTimestamp="2026-02-27 10:50:00 +0000 UTC" firstStartedPulling="2026-02-27 10:50:06.286397921 +0000 UTC m=+1426.248764027" lastFinishedPulling="2026-02-27 10:50:07.438058801 +0000 UTC m=+1427.400424897" observedRunningTime="2026-02-27 10:50:08.717798017 +0000 UTC m=+1428.680164123" watchObservedRunningTime="2026-02-27 10:50:08.722625338 +0000 UTC m=+1428.684991444" Feb 27 10:50:08 crc kubenswrapper[4728]: I0227 10:50:08.758579 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" podStartSLOduration=12.758558573 podStartE2EDuration="12.758558573s" podCreationTimestamp="2026-02-27 10:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:50:08.740913934 +0000 UTC m=+1428.703280050" watchObservedRunningTime="2026-02-27 10:50:08.758558573 +0000 UTC m=+1428.720924679" Feb 27 10:50:08 crc kubenswrapper[4728]: I0227 10:50:08.876766 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8589b5d689-mwmds" Feb 27 10:50:08 crc kubenswrapper[4728]: I0227 10:50:08.933192 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6bbb6b68c6-zhv6j"] Feb 27 10:50:08 crc kubenswrapper[4728]: I0227 10:50:08.933412 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6bbb6b68c6-zhv6j" 
podUID="ae36fee1-d5a7-470b-ae15-4eeb8d126951" containerName="neutron-api" containerID="cri-o://8ff7d0184cf0adf6783ca2ef0e4fa98bcdbe844421c7a65a377b153dd3e99288" gracePeriod=30 Feb 27 10:50:08 crc kubenswrapper[4728]: I0227 10:50:08.933841 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6bbb6b68c6-zhv6j" podUID="ae36fee1-d5a7-470b-ae15-4eeb8d126951" containerName="neutron-httpd" containerID="cri-o://2ec2cfe466d703d9c3909bfd575c33a84f34a43077f1ea7d4059f7e53fd9e1c3" gracePeriod=30 Feb 27 10:50:09 crc kubenswrapper[4728]: I0227 10:50:09.246069 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7371-account-create-update-6lhlf" Feb 27 10:50:09 crc kubenswrapper[4728]: I0227 10:50:09.364084 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/051376ee-f481-42d6-a69c-637247569a1d-operator-scripts\") pod \"051376ee-f481-42d6-a69c-637247569a1d\" (UID: \"051376ee-f481-42d6-a69c-637247569a1d\") " Feb 27 10:50:09 crc kubenswrapper[4728]: I0227 10:50:09.365003 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnl6x\" (UniqueName: \"kubernetes.io/projected/051376ee-f481-42d6-a69c-637247569a1d-kube-api-access-xnl6x\") pod \"051376ee-f481-42d6-a69c-637247569a1d\" (UID: \"051376ee-f481-42d6-a69c-637247569a1d\") " Feb 27 10:50:09 crc kubenswrapper[4728]: I0227 10:50:09.364992 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/051376ee-f481-42d6-a69c-637247569a1d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "051376ee-f481-42d6-a69c-637247569a1d" (UID: "051376ee-f481-42d6-a69c-637247569a1d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:50:09 crc kubenswrapper[4728]: I0227 10:50:09.402755 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/051376ee-f481-42d6-a69c-637247569a1d-kube-api-access-xnl6x" (OuterVolumeSpecName: "kube-api-access-xnl6x") pod "051376ee-f481-42d6-a69c-637247569a1d" (UID: "051376ee-f481-42d6-a69c-637247569a1d"). InnerVolumeSpecName "kube-api-access-xnl6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:09 crc kubenswrapper[4728]: E0227 10:50:09.453411 4728 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae36fee1_d5a7_470b_ae15_4eeb8d126951.slice/crio-2ec2cfe466d703d9c3909bfd575c33a84f34a43077f1ea7d4059f7e53fd9e1c3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae36fee1_d5a7_470b_ae15_4eeb8d126951.slice/crio-conmon-2ec2cfe466d703d9c3909bfd575c33a84f34a43077f1ea7d4059f7e53fd9e1c3.scope\": RecentStats: unable to find data in memory cache]" Feb 27 10:50:09 crc kubenswrapper[4728]: I0227 10:50:09.472236 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/051376ee-f481-42d6-a69c-637247569a1d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:09 crc kubenswrapper[4728]: I0227 10:50:09.472427 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnl6x\" (UniqueName: \"kubernetes.io/projected/051376ee-f481-42d6-a69c-637247569a1d-kube-api-access-xnl6x\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:09 crc kubenswrapper[4728]: I0227 10:50:09.694811 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7371-account-create-update-6lhlf" 
event={"ID":"051376ee-f481-42d6-a69c-637247569a1d","Type":"ContainerDied","Data":"be1ac0778f36f8e355d4923673df7c1665acabeccba1888d4d4f0663c86ec29a"} Feb 27 10:50:09 crc kubenswrapper[4728]: I0227 10:50:09.694864 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be1ac0778f36f8e355d4923673df7c1665acabeccba1888d4d4f0663c86ec29a" Feb 27 10:50:09 crc kubenswrapper[4728]: I0227 10:50:09.695353 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7371-account-create-update-6lhlf" Feb 27 10:50:09 crc kubenswrapper[4728]: I0227 10:50:09.701999 4728 generic.go:334] "Generic (PLEG): container finished" podID="ae36fee1-d5a7-470b-ae15-4eeb8d126951" containerID="2ec2cfe466d703d9c3909bfd575c33a84f34a43077f1ea7d4059f7e53fd9e1c3" exitCode=0 Feb 27 10:50:09 crc kubenswrapper[4728]: I0227 10:50:09.702155 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bbb6b68c6-zhv6j" event={"ID":"ae36fee1-d5a7-470b-ae15-4eeb8d126951","Type":"ContainerDied","Data":"2ec2cfe466d703d9c3909bfd575c33a84f34a43077f1ea7d4059f7e53fd9e1c3"} Feb 27 10:50:10 crc kubenswrapper[4728]: I0227 10:50:10.718137 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b66a-account-create-update-wtmz5" event={"ID":"af86f825-d16c-4856-a823-01f06824a97f","Type":"ContainerDied","Data":"4f60c6cd013aad66dd892924196847c95b5ee2cce37c008e71105ed9212c3d3f"} Feb 27 10:50:10 crc kubenswrapper[4728]: I0227 10:50:10.718436 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f60c6cd013aad66dd892924196847c95b5ee2cce37c008e71105ed9212c3d3f" Feb 27 10:50:10 crc kubenswrapper[4728]: I0227 10:50:10.749491 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cnj5s" event={"ID":"c297845b-bf34-4c1d-88f0-3f9b28d09e54","Type":"ContainerDied","Data":"6dc849902408777f12a07f1cb4a3e1f7a34b44fffec6bec5c3a0004dd3de0b33"} Feb 27 
10:50:10 crc kubenswrapper[4728]: I0227 10:50:10.750319 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dc849902408777f12a07f1cb4a3e1f7a34b44fffec6bec5c3a0004dd3de0b33" Feb 27 10:50:10 crc kubenswrapper[4728]: I0227 10:50:10.750362 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3833-account-create-update-sllck" event={"ID":"b3f1a225-d366-4d33-bd31-53ef143b546b","Type":"ContainerDied","Data":"66b0ec50e0faceebd1dbb351226adbfcbe150bb695888a5ee7c2a2122ecfb07e"} Feb 27 10:50:10 crc kubenswrapper[4728]: I0227 10:50:10.750378 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66b0ec50e0faceebd1dbb351226adbfcbe150bb695888a5ee7c2a2122ecfb07e" Feb 27 10:50:10 crc kubenswrapper[4728]: I0227 10:50:10.750389 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dmpcg" event={"ID":"9d5f0465-1665-4c7f-b8f5-e345efbe3872","Type":"ContainerDied","Data":"b04963d5a843c767e61253a3c6bdc9edeeb3fa4590a4701e3907e6222c920e44"} Feb 27 10:50:10 crc kubenswrapper[4728]: I0227 10:50:10.750402 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b04963d5a843c767e61253a3c6bdc9edeeb3fa4590a4701e3907e6222c920e44" Feb 27 10:50:10 crc kubenswrapper[4728]: I0227 10:50:10.750411 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m9lhq" event={"ID":"390e39cb-69e2-4f3b-94a7-deef2d13c027","Type":"ContainerDied","Data":"1bbbb1b64a60c450fb4dd29417d9844d1ab12d782186d5c616e8c8f9433dc991"} Feb 27 10:50:10 crc kubenswrapper[4728]: I0227 10:50:10.750423 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bbbb1b64a60c450fb4dd29417d9844d1ab12d782186d5c616e8c8f9433dc991" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.104654 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3833-account-create-update-sllck" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.157846 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cnj5s" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.176807 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dmpcg" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.176948 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b66a-account-create-update-wtmz5" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.214943 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m9lhq" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.239679 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qqwx\" (UniqueName: \"kubernetes.io/projected/9d5f0465-1665-4c7f-b8f5-e345efbe3872-kube-api-access-8qqwx\") pod \"9d5f0465-1665-4c7f-b8f5-e345efbe3872\" (UID: \"9d5f0465-1665-4c7f-b8f5-e345efbe3872\") " Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.239894 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njf2q\" (UniqueName: \"kubernetes.io/projected/b3f1a225-d366-4d33-bd31-53ef143b546b-kube-api-access-njf2q\") pod \"b3f1a225-d366-4d33-bd31-53ef143b546b\" (UID: \"b3f1a225-d366-4d33-bd31-53ef143b546b\") " Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.239938 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c297845b-bf34-4c1d-88f0-3f9b28d09e54-operator-scripts\") pod \"c297845b-bf34-4c1d-88f0-3f9b28d09e54\" (UID: \"c297845b-bf34-4c1d-88f0-3f9b28d09e54\") " Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 
10:50:11.240024 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af86f825-d16c-4856-a823-01f06824a97f-operator-scripts\") pod \"af86f825-d16c-4856-a823-01f06824a97f\" (UID: \"af86f825-d16c-4856-a823-01f06824a97f\") " Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.240056 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d5f0465-1665-4c7f-b8f5-e345efbe3872-operator-scripts\") pod \"9d5f0465-1665-4c7f-b8f5-e345efbe3872\" (UID: \"9d5f0465-1665-4c7f-b8f5-e345efbe3872\") " Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.240092 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3f1a225-d366-4d33-bd31-53ef143b546b-operator-scripts\") pod \"b3f1a225-d366-4d33-bd31-53ef143b546b\" (UID: \"b3f1a225-d366-4d33-bd31-53ef143b546b\") " Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.240217 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mctgl\" (UniqueName: \"kubernetes.io/projected/af86f825-d16c-4856-a823-01f06824a97f-kube-api-access-mctgl\") pod \"af86f825-d16c-4856-a823-01f06824a97f\" (UID: \"af86f825-d16c-4856-a823-01f06824a97f\") " Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.240256 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqptr\" (UniqueName: \"kubernetes.io/projected/c297845b-bf34-4c1d-88f0-3f9b28d09e54-kube-api-access-mqptr\") pod \"c297845b-bf34-4c1d-88f0-3f9b28d09e54\" (UID: \"c297845b-bf34-4c1d-88f0-3f9b28d09e54\") " Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.244532 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3f1a225-d366-4d33-bd31-53ef143b546b-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "b3f1a225-d366-4d33-bd31-53ef143b546b" (UID: "b3f1a225-d366-4d33-bd31-53ef143b546b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.245006 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d5f0465-1665-4c7f-b8f5-e345efbe3872-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d5f0465-1665-4c7f-b8f5-e345efbe3872" (UID: "9d5f0465-1665-4c7f-b8f5-e345efbe3872"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.245394 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c297845b-bf34-4c1d-88f0-3f9b28d09e54-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c297845b-bf34-4c1d-88f0-3f9b28d09e54" (UID: "c297845b-bf34-4c1d-88f0-3f9b28d09e54"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.247209 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af86f825-d16c-4856-a823-01f06824a97f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af86f825-d16c-4856-a823-01f06824a97f" (UID: "af86f825-d16c-4856-a823-01f06824a97f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.260245 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c297845b-bf34-4c1d-88f0-3f9b28d09e54-kube-api-access-mqptr" (OuterVolumeSpecName: "kube-api-access-mqptr") pod "c297845b-bf34-4c1d-88f0-3f9b28d09e54" (UID: "c297845b-bf34-4c1d-88f0-3f9b28d09e54"). InnerVolumeSpecName "kube-api-access-mqptr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.263955 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af86f825-d16c-4856-a823-01f06824a97f-kube-api-access-mctgl" (OuterVolumeSpecName: "kube-api-access-mctgl") pod "af86f825-d16c-4856-a823-01f06824a97f" (UID: "af86f825-d16c-4856-a823-01f06824a97f"). InnerVolumeSpecName "kube-api-access-mctgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.282268 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d5f0465-1665-4c7f-b8f5-e345efbe3872-kube-api-access-8qqwx" (OuterVolumeSpecName: "kube-api-access-8qqwx") pod "9d5f0465-1665-4c7f-b8f5-e345efbe3872" (UID: "9d5f0465-1665-4c7f-b8f5-e345efbe3872"). InnerVolumeSpecName "kube-api-access-8qqwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.290313 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3f1a225-d366-4d33-bd31-53ef143b546b-kube-api-access-njf2q" (OuterVolumeSpecName: "kube-api-access-njf2q") pod "b3f1a225-d366-4d33-bd31-53ef143b546b" (UID: "b3f1a225-d366-4d33-bd31-53ef143b546b"). InnerVolumeSpecName "kube-api-access-njf2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.342284 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdjwt\" (UniqueName: \"kubernetes.io/projected/390e39cb-69e2-4f3b-94a7-deef2d13c027-kube-api-access-cdjwt\") pod \"390e39cb-69e2-4f3b-94a7-deef2d13c027\" (UID: \"390e39cb-69e2-4f3b-94a7-deef2d13c027\") " Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.342373 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/390e39cb-69e2-4f3b-94a7-deef2d13c027-operator-scripts\") pod \"390e39cb-69e2-4f3b-94a7-deef2d13c027\" (UID: \"390e39cb-69e2-4f3b-94a7-deef2d13c027\") " Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.343031 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mctgl\" (UniqueName: \"kubernetes.io/projected/af86f825-d16c-4856-a823-01f06824a97f-kube-api-access-mctgl\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.343047 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqptr\" (UniqueName: \"kubernetes.io/projected/c297845b-bf34-4c1d-88f0-3f9b28d09e54-kube-api-access-mqptr\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.343056 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qqwx\" (UniqueName: \"kubernetes.io/projected/9d5f0465-1665-4c7f-b8f5-e345efbe3872-kube-api-access-8qqwx\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.343066 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njf2q\" (UniqueName: \"kubernetes.io/projected/b3f1a225-d366-4d33-bd31-53ef143b546b-kube-api-access-njf2q\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.343074 4728 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c297845b-bf34-4c1d-88f0-3f9b28d09e54-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.343083 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af86f825-d16c-4856-a823-01f06824a97f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.343091 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d5f0465-1665-4c7f-b8f5-e345efbe3872-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.343099 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3f1a225-d366-4d33-bd31-53ef143b546b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.343420 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/390e39cb-69e2-4f3b-94a7-deef2d13c027-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "390e39cb-69e2-4f3b-94a7-deef2d13c027" (UID: "390e39cb-69e2-4f3b-94a7-deef2d13c027"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.353287 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/390e39cb-69e2-4f3b-94a7-deef2d13c027-kube-api-access-cdjwt" (OuterVolumeSpecName: "kube-api-access-cdjwt") pod "390e39cb-69e2-4f3b-94a7-deef2d13c027" (UID: "390e39cb-69e2-4f3b-94a7-deef2d13c027"). InnerVolumeSpecName "kube-api-access-cdjwt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.445774 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdjwt\" (UniqueName: \"kubernetes.io/projected/390e39cb-69e2-4f3b-94a7-deef2d13c027-kube-api-access-cdjwt\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.446448 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/390e39cb-69e2-4f3b-94a7-deef2d13c027-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.812453 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" event={"ID":"f411351c-a796-4df4-9e09-407e93afb4a9","Type":"ContainerStarted","Data":"2ff6ea4b5f4988efbb09b919d8070765b7955305081f3fda1358c992bd6cff30"} Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.813114 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.825774 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d59b769b4-g5cgn" event={"ID":"cdcc3d16-3cf1-48e9-accc-c65d869be697","Type":"ContainerStarted","Data":"98aa5d90f75366f8df83f085775a7c5dc0d3742c07bff8ed2f7f1e13212ba811"} Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.825936 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.829769 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" podStartSLOduration=3.238468721 podStartE2EDuration="6.829757831s" podCreationTimestamp="2026-02-27 10:50:05 +0000 UTC" firstStartedPulling="2026-02-27 10:50:07.210675791 +0000 UTC m=+1427.173041897" 
lastFinishedPulling="2026-02-27 10:50:10.801964901 +0000 UTC m=+1430.764331007" observedRunningTime="2026-02-27 10:50:11.829399511 +0000 UTC m=+1431.791765617" watchObservedRunningTime="2026-02-27 10:50:11.829757831 +0000 UTC m=+1431.792123937" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.833218 4728 generic.go:334] "Generic (PLEG): container finished" podID="33b33e17-516b-44dc-b35c-267b155abfa3" containerID="44fb28981bed6077cf7613fe63246cf6fffc2f68088a0eb025ac856ba856db60" exitCode=0 Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.833302 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536490-5x76j" event={"ID":"33b33e17-516b-44dc-b35c-267b155abfa3","Type":"ContainerDied","Data":"44fb28981bed6077cf7613fe63246cf6fffc2f68088a0eb025ac856ba856db60"} Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.840804 4728 generic.go:334] "Generic (PLEG): container finished" podID="ae36fee1-d5a7-470b-ae15-4eeb8d126951" containerID="8ff7d0184cf0adf6783ca2ef0e4fa98bcdbe844421c7a65a377b153dd3e99288" exitCode=0 Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.840912 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bbb6b68c6-zhv6j" event={"ID":"ae36fee1-d5a7-470b-ae15-4eeb8d126951","Type":"ContainerDied","Data":"8ff7d0184cf0adf6783ca2ef0e4fa98bcdbe844421c7a65a377b153dd3e99288"} Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.854585 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5dcd5b5d76-tfs97" event={"ID":"96dc96a4-9c81-4702-8678-1f6824535e01","Type":"ContainerStarted","Data":"2e02056dd29e84312f0a4af87f7933d56f9035ceff201854e18c2833432799dd"} Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.854729 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5dcd5b5d76-tfs97" podUID="96dc96a4-9c81-4702-8678-1f6824535e01" containerName="heat-api" 
containerID="cri-o://2e02056dd29e84312f0a4af87f7933d56f9035ceff201854e18c2833432799dd" gracePeriod=60 Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.854966 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5dcd5b5d76-tfs97" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.863732 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7d59b769b4-g5cgn" podStartSLOduration=5.721612435 podStartE2EDuration="9.863713032s" podCreationTimestamp="2026-02-27 10:50:02 +0000 UTC" firstStartedPulling="2026-02-27 10:50:06.660107841 +0000 UTC m=+1426.622473947" lastFinishedPulling="2026-02-27 10:50:10.802208438 +0000 UTC m=+1430.764574544" observedRunningTime="2026-02-27 10:50:11.845485657 +0000 UTC m=+1431.807851763" watchObservedRunningTime="2026-02-27 10:50:11.863713032 +0000 UTC m=+1431.826079138" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.917293 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5dcd5b5d76-tfs97" podStartSLOduration=11.668498623 podStartE2EDuration="15.917270215s" podCreationTimestamp="2026-02-27 10:49:56 +0000 UTC" firstStartedPulling="2026-02-27 10:50:06.552072089 +0000 UTC m=+1426.514438195" lastFinishedPulling="2026-02-27 10:50:10.800843681 +0000 UTC m=+1430.763209787" observedRunningTime="2026-02-27 10:50:11.876092428 +0000 UTC m=+1431.838458544" watchObservedRunningTime="2026-02-27 10:50:11.917270215 +0000 UTC m=+1431.879636321" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.923023 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-54dc5858d7-7975b" podUID="5e7ce3c0-d850-41a4-862d-6fd83e20ae1f" containerName="heat-cfnapi" containerID="cri-o://e5417700dd6115e3e8901abc7bee28bf81b3f2d5a45d3f539280d05be4d62c71" gracePeriod=60 Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.923150 4728 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/heat-cfnapi-54dc5858d7-7975b" event={"ID":"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f","Type":"ContainerStarted","Data":"e5417700dd6115e3e8901abc7bee28bf81b3f2d5a45d3f539280d05be4d62c71"} Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.923193 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-54dc5858d7-7975b" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.943214 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-54dc5858d7-7975b" podStartSLOduration=11.49951568 podStartE2EDuration="15.943195629s" podCreationTimestamp="2026-02-27 10:49:56 +0000 UTC" firstStartedPulling="2026-02-27 10:50:06.300562055 +0000 UTC m=+1426.262928161" lastFinishedPulling="2026-02-27 10:50:10.744242004 +0000 UTC m=+1430.706608110" observedRunningTime="2026-02-27 10:50:11.943142727 +0000 UTC m=+1431.905508833" watchObservedRunningTime="2026-02-27 10:50:11.943195629 +0000 UTC m=+1431.905561735" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.950969 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cnj5s" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.952003 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b66a-account-create-update-wtmz5" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.952820 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m9lhq" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.952884 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3833-account-create-update-sllck" Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.962097 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38a4c39e-48ad-4e05-a4b8-162fef5472b4","Type":"ContainerStarted","Data":"9a8493936a4216b76ee7f114a82acbb29dae8761ff25687f8e678a3b0dde5c05"} Feb 27 10:50:11 crc kubenswrapper[4728]: I0227 10:50:11.962387 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dmpcg" Feb 27 10:50:12 crc kubenswrapper[4728]: I0227 10:50:12.166431 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:50:12 crc kubenswrapper[4728]: I0227 10:50:12.782472 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:50:12 crc kubenswrapper[4728]: I0227 10:50:12.901377 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-config\") pod \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " Feb 27 10:50:12 crc kubenswrapper[4728]: I0227 10:50:12.901599 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-ovndb-tls-certs\") pod \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " Feb 27 10:50:12 crc kubenswrapper[4728]: I0227 10:50:12.901759 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49hl9\" (UniqueName: \"kubernetes.io/projected/ae36fee1-d5a7-470b-ae15-4eeb8d126951-kube-api-access-49hl9\") pod \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " Feb 
27 10:50:12 crc kubenswrapper[4728]: I0227 10:50:12.902453 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-combined-ca-bundle\") pod \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " Feb 27 10:50:12 crc kubenswrapper[4728]: I0227 10:50:12.902570 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-httpd-config\") pod \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\" (UID: \"ae36fee1-d5a7-470b-ae15-4eeb8d126951\") " Feb 27 10:50:12 crc kubenswrapper[4728]: I0227 10:50:12.907862 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae36fee1-d5a7-470b-ae15-4eeb8d126951-kube-api-access-49hl9" (OuterVolumeSpecName: "kube-api-access-49hl9") pod "ae36fee1-d5a7-470b-ae15-4eeb8d126951" (UID: "ae36fee1-d5a7-470b-ae15-4eeb8d126951"). InnerVolumeSpecName "kube-api-access-49hl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:12 crc kubenswrapper[4728]: I0227 10:50:12.910976 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ae36fee1-d5a7-470b-ae15-4eeb8d126951" (UID: "ae36fee1-d5a7-470b-ae15-4eeb8d126951"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:12 crc kubenswrapper[4728]: I0227 10:50:12.970748 4728 generic.go:334] "Generic (PLEG): container finished" podID="5e7ce3c0-d850-41a4-862d-6fd83e20ae1f" containerID="e5417700dd6115e3e8901abc7bee28bf81b3f2d5a45d3f539280d05be4d62c71" exitCode=0 Feb 27 10:50:12 crc kubenswrapper[4728]: I0227 10:50:12.970817 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54dc5858d7-7975b" event={"ID":"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f","Type":"ContainerDied","Data":"e5417700dd6115e3e8901abc7bee28bf81b3f2d5a45d3f539280d05be4d62c71"} Feb 27 10:50:12 crc kubenswrapper[4728]: I0227 10:50:12.980356 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38a4c39e-48ad-4e05-a4b8-162fef5472b4","Type":"ContainerStarted","Data":"9fa81f802da6341489496460593874668928df9ce8293595621275dc671ab4de"} Feb 27 10:50:12 crc kubenswrapper[4728]: I0227 10:50:12.988839 4728 generic.go:334] "Generic (PLEG): container finished" podID="cdcc3d16-3cf1-48e9-accc-c65d869be697" containerID="98aa5d90f75366f8df83f085775a7c5dc0d3742c07bff8ed2f7f1e13212ba811" exitCode=1 Feb 27 10:50:12 crc kubenswrapper[4728]: I0227 10:50:12.988955 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d59b769b4-g5cgn" event={"ID":"cdcc3d16-3cf1-48e9-accc-c65d869be697","Type":"ContainerDied","Data":"98aa5d90f75366f8df83f085775a7c5dc0d3742c07bff8ed2f7f1e13212ba811"} Feb 27 10:50:12 crc kubenswrapper[4728]: I0227 10:50:12.989934 4728 scope.go:117] "RemoveContainer" containerID="98aa5d90f75366f8df83f085775a7c5dc0d3742c07bff8ed2f7f1e13212ba811" Feb 27 10:50:12 crc kubenswrapper[4728]: I0227 10:50:12.999126 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-76558d5849-k75gx" event={"ID":"6386555e-93c8-46af-bdc9-ca0db04f8712","Type":"ContainerStarted","Data":"6f020602c3a8c0f26c04aafcd0a976ad62bbbc9693061a0b5c11b4fb5b6081ac"} Feb 27 10:50:12 crc 
kubenswrapper[4728]: I0227 10:50:12.999359 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.005999 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49hl9\" (UniqueName: \"kubernetes.io/projected/ae36fee1-d5a7-470b-ae15-4eeb8d126951-kube-api-access-49hl9\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.006027 4728 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.009765 4728 generic.go:334] "Generic (PLEG): container finished" podID="ca804f78-493d-459d-9061-ee9fe01d8732" containerID="867734cd15d9307d81fe19e1ee34c1fc43214fd377443fe8ded045635904eb56" exitCode=1 Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.009853 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" event={"ID":"ca804f78-493d-459d-9061-ee9fe01d8732","Type":"ContainerDied","Data":"867734cd15d9307d81fe19e1ee34c1fc43214fd377443fe8ded045635904eb56"} Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.010852 4728 scope.go:117] "RemoveContainer" containerID="867734cd15d9307d81fe19e1ee34c1fc43214fd377443fe8ded045635904eb56" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.017598 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6bbb6b68c6-zhv6j" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.018285 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bbb6b68c6-zhv6j" event={"ID":"ae36fee1-d5a7-470b-ae15-4eeb8d126951","Type":"ContainerDied","Data":"a62657ad6dd5ec6fc69071e4f8e44e841db4ea46839b61757b39e1624b584191"} Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.018325 4728 scope.go:117] "RemoveContainer" containerID="2ec2cfe466d703d9c3909bfd575c33a84f34a43077f1ea7d4059f7e53fd9e1c3" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.036279 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-config" (OuterVolumeSpecName: "config") pod "ae36fee1-d5a7-470b-ae15-4eeb8d126951" (UID: "ae36fee1-d5a7-470b-ae15-4eeb8d126951"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.092157 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae36fee1-d5a7-470b-ae15-4eeb8d126951" (UID: "ae36fee1-d5a7-470b-ae15-4eeb8d126951"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.105272 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-76558d5849-k75gx" podStartSLOduration=4.330468381 podStartE2EDuration="8.105250371s" podCreationTimestamp="2026-02-27 10:50:05 +0000 UTC" firstStartedPulling="2026-02-27 10:50:07.018948398 +0000 UTC m=+1426.981314494" lastFinishedPulling="2026-02-27 10:50:10.793730378 +0000 UTC m=+1430.756096484" observedRunningTime="2026-02-27 10:50:13.072865243 +0000 UTC m=+1433.035231369" watchObservedRunningTime="2026-02-27 10:50:13.105250371 +0000 UTC m=+1433.067616477" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.110223 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.110254 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.130919 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ae36fee1-d5a7-470b-ae15-4eeb8d126951" (UID: "ae36fee1-d5a7-470b-ae15-4eeb8d126951"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.215366 4728 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae36fee1-d5a7-470b-ae15-4eeb8d126951-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.236972 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.237021 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.258256 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-54dc5858d7-7975b" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.266746 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.286496 4728 scope.go:117] "RemoveContainer" containerID="8ff7d0184cf0adf6783ca2ef0e4fa98bcdbe844421c7a65a377b153dd3e99288" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.316820 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-config-data-custom\") pod \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\" (UID: \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\") " Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.317161 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-combined-ca-bundle\") pod \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\" (UID: \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\") " Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 
10:50:13.317352 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-config-data\") pod \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\" (UID: \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\") " Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.317462 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9bhr\" (UniqueName: \"kubernetes.io/projected/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-kube-api-access-x9bhr\") pod \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\" (UID: \"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f\") " Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.365904 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5e7ce3c0-d850-41a4-862d-6fd83e20ae1f" (UID: "5e7ce3c0-d850-41a4-862d-6fd83e20ae1f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.366476 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-kube-api-access-x9bhr" (OuterVolumeSpecName: "kube-api-access-x9bhr") pod "5e7ce3c0-d850-41a4-862d-6fd83e20ae1f" (UID: "5e7ce3c0-d850-41a4-862d-6fd83e20ae1f"). InnerVolumeSpecName "kube-api-access-x9bhr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.426278 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6bbb6b68c6-zhv6j"] Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.430273 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.447870 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9bhr\" (UniqueName: \"kubernetes.io/projected/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-kube-api-access-x9bhr\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.505211 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e7ce3c0-d850-41a4-862d-6fd83e20ae1f" (UID: "5e7ce3c0-d850-41a4-862d-6fd83e20ae1f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.513621 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6bbb6b68c6-zhv6j"] Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.551312 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.570434 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536490-5x76j" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.573641 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-config-data" (OuterVolumeSpecName: "config-data") pod "5e7ce3c0-d850-41a4-862d-6fd83e20ae1f" (UID: "5e7ce3c0-d850-41a4-862d-6fd83e20ae1f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.652925 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9px8h\" (UniqueName: \"kubernetes.io/projected/33b33e17-516b-44dc-b35c-267b155abfa3-kube-api-access-9px8h\") pod \"33b33e17-516b-44dc-b35c-267b155abfa3\" (UID: \"33b33e17-516b-44dc-b35c-267b155abfa3\") " Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.653918 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.658983 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33b33e17-516b-44dc-b35c-267b155abfa3-kube-api-access-9px8h" (OuterVolumeSpecName: "kube-api-access-9px8h") pod "33b33e17-516b-44dc-b35c-267b155abfa3" (UID: "33b33e17-516b-44dc-b35c-267b155abfa3"). InnerVolumeSpecName "kube-api-access-9px8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.758077 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9px8h\" (UniqueName: \"kubernetes.io/projected/33b33e17-516b-44dc-b35c-267b155abfa3-kube-api-access-9px8h\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.955608 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536484-b8wn9"] Feb 27 10:50:13 crc kubenswrapper[4728]: I0227 10:50:13.970703 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536484-b8wn9"] Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.033305 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54dc5858d7-7975b" event={"ID":"5e7ce3c0-d850-41a4-862d-6fd83e20ae1f","Type":"ContainerDied","Data":"79c9d5719c3b09abe1e2e0814c8cf6f530a378248ea3f2eb7f5f77d5fbe445d9"} Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.033368 4728 scope.go:117] "RemoveContainer" containerID="e5417700dd6115e3e8901abc7bee28bf81b3f2d5a45d3f539280d05be4d62c71" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.033582 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-54dc5858d7-7975b" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.041535 4728 generic.go:334] "Generic (PLEG): container finished" podID="cdcc3d16-3cf1-48e9-accc-c65d869be697" containerID="408d3b32b4d155259369715a3751924d39ecb393a6859500da6d601d720306b6" exitCode=1 Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.041614 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d59b769b4-g5cgn" event={"ID":"cdcc3d16-3cf1-48e9-accc-c65d869be697","Type":"ContainerDied","Data":"408d3b32b4d155259369715a3751924d39ecb393a6859500da6d601d720306b6"} Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.042326 4728 scope.go:117] "RemoveContainer" containerID="408d3b32b4d155259369715a3751924d39ecb393a6859500da6d601d720306b6" Feb 27 10:50:14 crc kubenswrapper[4728]: E0227 10:50:14.042674 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-7d59b769b4-g5cgn_openstack(cdcc3d16-3cf1-48e9-accc-c65d869be697)\"" pod="openstack/heat-api-7d59b769b4-g5cgn" podUID="cdcc3d16-3cf1-48e9-accc-c65d869be697" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.045424 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536490-5x76j" event={"ID":"33b33e17-516b-44dc-b35c-267b155abfa3","Type":"ContainerDied","Data":"cd23d5b193252fc8bc4ead0cdf2eb08a71279c3e2ef59cdaa50c97ec56cdecfb"} Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.045452 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd23d5b193252fc8bc4ead0cdf2eb08a71279c3e2ef59cdaa50c97ec56cdecfb" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.045497 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536490-5x76j" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.051130 4728 generic.go:334] "Generic (PLEG): container finished" podID="ca804f78-493d-459d-9061-ee9fe01d8732" containerID="1dd580c1b325d5cc078e4ea5ea7d5071cbd0e43ca7a1ffdabb5aa958c04dba56" exitCode=1 Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.051467 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" event={"ID":"ca804f78-493d-459d-9061-ee9fe01d8732","Type":"ContainerDied","Data":"1dd580c1b325d5cc078e4ea5ea7d5071cbd0e43ca7a1ffdabb5aa958c04dba56"} Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.051995 4728 scope.go:117] "RemoveContainer" containerID="1dd580c1b325d5cc078e4ea5ea7d5071cbd0e43ca7a1ffdabb5aa958c04dba56" Feb 27 10:50:14 crc kubenswrapper[4728]: E0227 10:50:14.052233 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5d7b9476fd-gdljj_openstack(ca804f78-493d-459d-9061-ee9fe01d8732)\"" pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" podUID="ca804f78-493d-459d-9061-ee9fe01d8732" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.073547 4728 scope.go:117] "RemoveContainer" containerID="98aa5d90f75366f8df83f085775a7c5dc0d3742c07bff8ed2f7f1e13212ba811" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.077579 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-54dc5858d7-7975b"] Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.098565 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-54dc5858d7-7975b"] Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.151633 4728 scope.go:117] "RemoveContainer" containerID="867734cd15d9307d81fe19e1ee34c1fc43214fd377443fe8ded045635904eb56" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 
10:50:14.669893 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-74b5q"] Feb 27 10:50:14 crc kubenswrapper[4728]: E0227 10:50:14.670599 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="390e39cb-69e2-4f3b-94a7-deef2d13c027" containerName="mariadb-database-create" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.670615 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="390e39cb-69e2-4f3b-94a7-deef2d13c027" containerName="mariadb-database-create" Feb 27 10:50:14 crc kubenswrapper[4728]: E0227 10:50:14.670627 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af86f825-d16c-4856-a823-01f06824a97f" containerName="mariadb-account-create-update" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.670635 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="af86f825-d16c-4856-a823-01f06824a97f" containerName="mariadb-account-create-update" Feb 27 10:50:14 crc kubenswrapper[4728]: E0227 10:50:14.670651 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d5f0465-1665-4c7f-b8f5-e345efbe3872" containerName="mariadb-database-create" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.670657 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d5f0465-1665-4c7f-b8f5-e345efbe3872" containerName="mariadb-database-create" Feb 27 10:50:14 crc kubenswrapper[4728]: E0227 10:50:14.670672 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f1a225-d366-4d33-bd31-53ef143b546b" containerName="mariadb-account-create-update" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.670679 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f1a225-d366-4d33-bd31-53ef143b546b" containerName="mariadb-account-create-update" Feb 27 10:50:14 crc kubenswrapper[4728]: E0227 10:50:14.670690 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae36fee1-d5a7-470b-ae15-4eeb8d126951" containerName="neutron-httpd" 
Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.670696 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae36fee1-d5a7-470b-ae15-4eeb8d126951" containerName="neutron-httpd" Feb 27 10:50:14 crc kubenswrapper[4728]: E0227 10:50:14.670709 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c297845b-bf34-4c1d-88f0-3f9b28d09e54" containerName="mariadb-database-create" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.670715 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c297845b-bf34-4c1d-88f0-3f9b28d09e54" containerName="mariadb-database-create" Feb 27 10:50:14 crc kubenswrapper[4728]: E0227 10:50:14.670726 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae36fee1-d5a7-470b-ae15-4eeb8d126951" containerName="neutron-api" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.670733 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae36fee1-d5a7-470b-ae15-4eeb8d126951" containerName="neutron-api" Feb 27 10:50:14 crc kubenswrapper[4728]: E0227 10:50:14.670744 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051376ee-f481-42d6-a69c-637247569a1d" containerName="mariadb-account-create-update" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.670751 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="051376ee-f481-42d6-a69c-637247569a1d" containerName="mariadb-account-create-update" Feb 27 10:50:14 crc kubenswrapper[4728]: E0227 10:50:14.670784 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e7ce3c0-d850-41a4-862d-6fd83e20ae1f" containerName="heat-cfnapi" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.670789 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e7ce3c0-d850-41a4-862d-6fd83e20ae1f" containerName="heat-cfnapi" Feb 27 10:50:14 crc kubenswrapper[4728]: E0227 10:50:14.670798 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b33e17-516b-44dc-b35c-267b155abfa3" containerName="oc" 
Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.670804 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b33e17-516b-44dc-b35c-267b155abfa3" containerName="oc" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.670995 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="051376ee-f481-42d6-a69c-637247569a1d" containerName="mariadb-account-create-update" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.671006 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="af86f825-d16c-4856-a823-01f06824a97f" containerName="mariadb-account-create-update" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.671025 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="33b33e17-516b-44dc-b35c-267b155abfa3" containerName="oc" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.671036 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c297845b-bf34-4c1d-88f0-3f9b28d09e54" containerName="mariadb-database-create" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.671046 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f1a225-d366-4d33-bd31-53ef143b546b" containerName="mariadb-account-create-update" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.671054 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae36fee1-d5a7-470b-ae15-4eeb8d126951" containerName="neutron-api" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.671067 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e7ce3c0-d850-41a4-862d-6fd83e20ae1f" containerName="heat-cfnapi" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.671080 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d5f0465-1665-4c7f-b8f5-e345efbe3872" containerName="mariadb-database-create" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.671092 4728 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ae36fee1-d5a7-470b-ae15-4eeb8d126951" containerName="neutron-httpd" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.671101 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="390e39cb-69e2-4f3b-94a7-deef2d13c027" containerName="mariadb-database-create" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.671889 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-74b5q" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.674685 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.674881 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nzv8g" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.676879 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.703002 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-74b5q"] Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.737622 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e7ce3c0-d850-41a4-862d-6fd83e20ae1f" path="/var/lib/kubelet/pods/5e7ce3c0-d850-41a4-862d-6fd83e20ae1f/volumes" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.738150 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae36fee1-d5a7-470b-ae15-4eeb8d126951" path="/var/lib/kubelet/pods/ae36fee1-d5a7-470b-ae15-4eeb8d126951/volumes" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.740232 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2071f71-edef-47c5-a08a-3ed7891d66c0" path="/var/lib/kubelet/pods/e2071f71-edef-47c5-a08a-3ed7891d66c0/volumes" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.784733 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aaa1a60-7863-44de-8271-50bcd8fc1743-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-74b5q\" (UID: \"4aaa1a60-7863-44de-8271-50bcd8fc1743\") " pod="openstack/nova-cell0-conductor-db-sync-74b5q" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.784782 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aaa1a60-7863-44de-8271-50bcd8fc1743-config-data\") pod \"nova-cell0-conductor-db-sync-74b5q\" (UID: \"4aaa1a60-7863-44de-8271-50bcd8fc1743\") " pod="openstack/nova-cell0-conductor-db-sync-74b5q" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.785332 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bhms\" (UniqueName: \"kubernetes.io/projected/4aaa1a60-7863-44de-8271-50bcd8fc1743-kube-api-access-6bhms\") pod \"nova-cell0-conductor-db-sync-74b5q\" (UID: \"4aaa1a60-7863-44de-8271-50bcd8fc1743\") " pod="openstack/nova-cell0-conductor-db-sync-74b5q" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.785464 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aaa1a60-7863-44de-8271-50bcd8fc1743-scripts\") pod \"nova-cell0-conductor-db-sync-74b5q\" (UID: \"4aaa1a60-7863-44de-8271-50bcd8fc1743\") " pod="openstack/nova-cell0-conductor-db-sync-74b5q" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.887435 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bhms\" (UniqueName: \"kubernetes.io/projected/4aaa1a60-7863-44de-8271-50bcd8fc1743-kube-api-access-6bhms\") pod \"nova-cell0-conductor-db-sync-74b5q\" (UID: \"4aaa1a60-7863-44de-8271-50bcd8fc1743\") " 
pod="openstack/nova-cell0-conductor-db-sync-74b5q" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.887989 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aaa1a60-7863-44de-8271-50bcd8fc1743-scripts\") pod \"nova-cell0-conductor-db-sync-74b5q\" (UID: \"4aaa1a60-7863-44de-8271-50bcd8fc1743\") " pod="openstack/nova-cell0-conductor-db-sync-74b5q" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.888101 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aaa1a60-7863-44de-8271-50bcd8fc1743-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-74b5q\" (UID: \"4aaa1a60-7863-44de-8271-50bcd8fc1743\") " pod="openstack/nova-cell0-conductor-db-sync-74b5q" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.888129 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aaa1a60-7863-44de-8271-50bcd8fc1743-config-data\") pod \"nova-cell0-conductor-db-sync-74b5q\" (UID: \"4aaa1a60-7863-44de-8271-50bcd8fc1743\") " pod="openstack/nova-cell0-conductor-db-sync-74b5q" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.892976 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aaa1a60-7863-44de-8271-50bcd8fc1743-config-data\") pod \"nova-cell0-conductor-db-sync-74b5q\" (UID: \"4aaa1a60-7863-44de-8271-50bcd8fc1743\") " pod="openstack/nova-cell0-conductor-db-sync-74b5q" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.893412 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aaa1a60-7863-44de-8271-50bcd8fc1743-scripts\") pod \"nova-cell0-conductor-db-sync-74b5q\" (UID: \"4aaa1a60-7863-44de-8271-50bcd8fc1743\") " pod="openstack/nova-cell0-conductor-db-sync-74b5q" Feb 27 
10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.893768 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aaa1a60-7863-44de-8271-50bcd8fc1743-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-74b5q\" (UID: \"4aaa1a60-7863-44de-8271-50bcd8fc1743\") " pod="openstack/nova-cell0-conductor-db-sync-74b5q" Feb 27 10:50:14 crc kubenswrapper[4728]: I0227 10:50:14.906082 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bhms\" (UniqueName: \"kubernetes.io/projected/4aaa1a60-7863-44de-8271-50bcd8fc1743-kube-api-access-6bhms\") pod \"nova-cell0-conductor-db-sync-74b5q\" (UID: \"4aaa1a60-7863-44de-8271-50bcd8fc1743\") " pod="openstack/nova-cell0-conductor-db-sync-74b5q" Feb 27 10:50:15 crc kubenswrapper[4728]: I0227 10:50:15.062178 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-74b5q" Feb 27 10:50:15 crc kubenswrapper[4728]: I0227 10:50:15.062967 4728 scope.go:117] "RemoveContainer" containerID="1dd580c1b325d5cc078e4ea5ea7d5071cbd0e43ca7a1ffdabb5aa958c04dba56" Feb 27 10:50:15 crc kubenswrapper[4728]: E0227 10:50:15.063226 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5d7b9476fd-gdljj_openstack(ca804f78-493d-459d-9061-ee9fe01d8732)\"" pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" podUID="ca804f78-493d-459d-9061-ee9fe01d8732" Feb 27 10:50:15 crc kubenswrapper[4728]: I0227 10:50:15.067011 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38a4c39e-48ad-4e05-a4b8-162fef5472b4","Type":"ContainerStarted","Data":"77755e04f3436ff2b8e985276880d15d9adf11c6e6b6597433874f97e68ae2da"} Feb 27 10:50:15 crc kubenswrapper[4728]: I0227 10:50:15.068087 4728 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 10:50:15 crc kubenswrapper[4728]: I0227 10:50:15.070852 4728 scope.go:117] "RemoveContainer" containerID="408d3b32b4d155259369715a3751924d39ecb393a6859500da6d601d720306b6" Feb 27 10:50:15 crc kubenswrapper[4728]: E0227 10:50:15.071162 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-7d59b769b4-g5cgn_openstack(cdcc3d16-3cf1-48e9-accc-c65d869be697)\"" pod="openstack/heat-api-7d59b769b4-g5cgn" podUID="cdcc3d16-3cf1-48e9-accc-c65d869be697" Feb 27 10:50:15 crc kubenswrapper[4728]: I0227 10:50:15.155151 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.529050641 podStartE2EDuration="10.155125305s" podCreationTimestamp="2026-02-27 10:50:05 +0000 UTC" firstStartedPulling="2026-02-27 10:50:06.752853648 +0000 UTC m=+1426.715219754" lastFinishedPulling="2026-02-27 10:50:14.378928312 +0000 UTC m=+1434.341294418" observedRunningTime="2026-02-27 10:50:15.127675649 +0000 UTC m=+1435.090041785" watchObservedRunningTime="2026-02-27 10:50:15.155125305 +0000 UTC m=+1435.117491411" Feb 27 10:50:15 crc kubenswrapper[4728]: I0227 10:50:15.601943 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-74b5q"] Feb 27 10:50:15 crc kubenswrapper[4728]: W0227 10:50:15.610537 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4aaa1a60_7863_44de_8271_50bcd8fc1743.slice/crio-e3dec06fbed6b82e0789ee1235bd580c8e711d555d426458deb11086de593275 WatchSource:0}: Error finding container e3dec06fbed6b82e0789ee1235bd580c8e711d555d426458deb11086de593275: Status 404 returned error can't find the container with id e3dec06fbed6b82e0789ee1235bd580c8e711d555d426458deb11086de593275 Feb 27 10:50:16 crc 
kubenswrapper[4728]: I0227 10:50:16.082992 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-74b5q" event={"ID":"4aaa1a60-7863-44de-8271-50bcd8fc1743","Type":"ContainerStarted","Data":"e3dec06fbed6b82e0789ee1235bd580c8e711d555d426458deb11086de593275"} Feb 27 10:50:16 crc kubenswrapper[4728]: I0227 10:50:16.607620 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:50:16 crc kubenswrapper[4728]: I0227 10:50:16.681633 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-67mnv"] Feb 27 10:50:16 crc kubenswrapper[4728]: I0227 10:50:16.681866 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" podUID="6ac390a6-133b-434f-a853-cf7fc9d18de1" containerName="dnsmasq-dns" containerID="cri-o://144dcde888cd69d067767c13a3cfa8aef249f02c2010b5aa3981ea8193e9f6a7" gracePeriod=10 Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.109406 4728 generic.go:334] "Generic (PLEG): container finished" podID="6ac390a6-133b-434f-a853-cf7fc9d18de1" containerID="144dcde888cd69d067767c13a3cfa8aef249f02c2010b5aa3981ea8193e9f6a7" exitCode=0 Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.110929 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" event={"ID":"6ac390a6-133b-434f-a853-cf7fc9d18de1","Type":"ContainerDied","Data":"144dcde888cd69d067767c13a3cfa8aef249f02c2010b5aa3981ea8193e9f6a7"} Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.157523 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6bfbb66dbc-8ddj8" Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.397296 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.565333 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-dns-svc\") pod \"6ac390a6-133b-434f-a853-cf7fc9d18de1\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.565466 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-ovsdbserver-sb\") pod \"6ac390a6-133b-434f-a853-cf7fc9d18de1\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.565592 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-config\") pod \"6ac390a6-133b-434f-a853-cf7fc9d18de1\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.565655 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-ovsdbserver-nb\") pod \"6ac390a6-133b-434f-a853-cf7fc9d18de1\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.565705 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqflt\" (UniqueName: \"kubernetes.io/projected/6ac390a6-133b-434f-a853-cf7fc9d18de1-kube-api-access-qqflt\") pod \"6ac390a6-133b-434f-a853-cf7fc9d18de1\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.565740 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-dns-swift-storage-0\") pod \"6ac390a6-133b-434f-a853-cf7fc9d18de1\" (UID: \"6ac390a6-133b-434f-a853-cf7fc9d18de1\") " Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.599745 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac390a6-133b-434f-a853-cf7fc9d18de1-kube-api-access-qqflt" (OuterVolumeSpecName: "kube-api-access-qqflt") pod "6ac390a6-133b-434f-a853-cf7fc9d18de1" (UID: "6ac390a6-133b-434f-a853-cf7fc9d18de1"). InnerVolumeSpecName "kube-api-access-qqflt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.650664 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6ac390a6-133b-434f-a853-cf7fc9d18de1" (UID: "6ac390a6-133b-434f-a853-cf7fc9d18de1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.670282 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqflt\" (UniqueName: \"kubernetes.io/projected/6ac390a6-133b-434f-a853-cf7fc9d18de1-kube-api-access-qqflt\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.670318 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.686870 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6ac390a6-133b-434f-a853-cf7fc9d18de1" (UID: "6ac390a6-133b-434f-a853-cf7fc9d18de1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.690450 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6ac390a6-133b-434f-a853-cf7fc9d18de1" (UID: "6ac390a6-133b-434f-a853-cf7fc9d18de1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.706230 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-config" (OuterVolumeSpecName: "config") pod "6ac390a6-133b-434f-a853-cf7fc9d18de1" (UID: "6ac390a6-133b-434f-a853-cf7fc9d18de1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.711812 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6ac390a6-133b-434f-a853-cf7fc9d18de1" (UID: "6ac390a6-133b-434f-a853-cf7fc9d18de1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.773953 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.773979 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.773990 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.773998 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ac390a6-133b-434f-a853-cf7fc9d18de1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.905858 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:50:17 crc kubenswrapper[4728]: I0227 10:50:17.955983 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5d7b9476fd-gdljj"] Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.130387 4728 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" event={"ID":"6ac390a6-133b-434f-a853-cf7fc9d18de1","Type":"ContainerDied","Data":"7baceb6ff4a47a0f088d9f22e4ccbb8aaa682c02f7db99253e549cb3f392b047"} Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.130435 4728 scope.go:117] "RemoveContainer" containerID="144dcde888cd69d067767c13a3cfa8aef249f02c2010b5aa3981ea8193e9f6a7" Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.130534 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-67mnv" Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.190595 4728 scope.go:117] "RemoveContainer" containerID="b4704af0a9c43403ef4f3d24fefd50ee5b346c2d963535d161ca1e5b68777f3d" Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.193476 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-67mnv"] Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.208273 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-67mnv"] Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.239627 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.267661 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.267948 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.268495 4728 scope.go:117] "RemoveContainer" containerID="408d3b32b4d155259369715a3751924d39ecb393a6859500da6d601d720306b6" Feb 27 10:50:18 crc kubenswrapper[4728]: E0227 10:50:18.268882 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=heat-api pod=heat-api-7d59b769b4-g5cgn_openstack(cdcc3d16-3cf1-48e9-accc-c65d869be697)\"" pod="openstack/heat-api-7d59b769b4-g5cgn" podUID="cdcc3d16-3cf1-48e9-accc-c65d869be697" Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.489551 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.594808 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca804f78-493d-459d-9061-ee9fe01d8732-config-data\") pod \"ca804f78-493d-459d-9061-ee9fe01d8732\" (UID: \"ca804f78-493d-459d-9061-ee9fe01d8732\") " Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.594940 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca804f78-493d-459d-9061-ee9fe01d8732-config-data-custom\") pod \"ca804f78-493d-459d-9061-ee9fe01d8732\" (UID: \"ca804f78-493d-459d-9061-ee9fe01d8732\") " Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.595470 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvxjn\" (UniqueName: \"kubernetes.io/projected/ca804f78-493d-459d-9061-ee9fe01d8732-kube-api-access-zvxjn\") pod \"ca804f78-493d-459d-9061-ee9fe01d8732\" (UID: \"ca804f78-493d-459d-9061-ee9fe01d8732\") " Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.597845 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca804f78-493d-459d-9061-ee9fe01d8732-combined-ca-bundle\") pod \"ca804f78-493d-459d-9061-ee9fe01d8732\" (UID: \"ca804f78-493d-459d-9061-ee9fe01d8732\") " Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.610868 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/ca804f78-493d-459d-9061-ee9fe01d8732-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ca804f78-493d-459d-9061-ee9fe01d8732" (UID: "ca804f78-493d-459d-9061-ee9fe01d8732"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.625746 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca804f78-493d-459d-9061-ee9fe01d8732-kube-api-access-zvxjn" (OuterVolumeSpecName: "kube-api-access-zvxjn") pod "ca804f78-493d-459d-9061-ee9fe01d8732" (UID: "ca804f78-493d-459d-9061-ee9fe01d8732"). InnerVolumeSpecName "kube-api-access-zvxjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.676866 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca804f78-493d-459d-9061-ee9fe01d8732-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca804f78-493d-459d-9061-ee9fe01d8732" (UID: "ca804f78-493d-459d-9061-ee9fe01d8732"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.702811 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvxjn\" (UniqueName: \"kubernetes.io/projected/ca804f78-493d-459d-9061-ee9fe01d8732-kube-api-access-zvxjn\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.702932 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca804f78-493d-459d-9061-ee9fe01d8732-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.702992 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca804f78-493d-459d-9061-ee9fe01d8732-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.710829 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca804f78-493d-459d-9061-ee9fe01d8732-config-data" (OuterVolumeSpecName: "config-data") pod "ca804f78-493d-459d-9061-ee9fe01d8732" (UID: "ca804f78-493d-459d-9061-ee9fe01d8732"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.743478 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ac390a6-133b-434f-a853-cf7fc9d18de1" path="/var/lib/kubelet/pods/6ac390a6-133b-434f-a853-cf7fc9d18de1/volumes" Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.818056 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca804f78-493d-459d-9061-ee9fe01d8732-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:18 crc kubenswrapper[4728]: I0227 10:50:18.828165 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5dcd5b5d76-tfs97" Feb 27 10:50:19 crc kubenswrapper[4728]: I0227 10:50:19.191235 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" event={"ID":"ca804f78-493d-459d-9061-ee9fe01d8732","Type":"ContainerDied","Data":"a6f287951a7e8e887032a9cf69cae4c6062482bdacd2b323b25105eb9e36dd45"} Feb 27 10:50:19 crc kubenswrapper[4728]: I0227 10:50:19.191300 4728 scope.go:117] "RemoveContainer" containerID="1dd580c1b325d5cc078e4ea5ea7d5071cbd0e43ca7a1ffdabb5aa958c04dba56" Feb 27 10:50:19 crc kubenswrapper[4728]: I0227 10:50:19.191433 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5d7b9476fd-gdljj" Feb 27 10:50:19 crc kubenswrapper[4728]: I0227 10:50:19.204972 4728 scope.go:117] "RemoveContainer" containerID="408d3b32b4d155259369715a3751924d39ecb393a6859500da6d601d720306b6" Feb 27 10:50:19 crc kubenswrapper[4728]: E0227 10:50:19.205216 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-7d59b769b4-g5cgn_openstack(cdcc3d16-3cf1-48e9-accc-c65d869be697)\"" pod="openstack/heat-api-7d59b769b4-g5cgn" podUID="cdcc3d16-3cf1-48e9-accc-c65d869be697" Feb 27 10:50:19 crc kubenswrapper[4728]: I0227 10:50:19.244577 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5d7b9476fd-gdljj"] Feb 27 10:50:19 crc kubenswrapper[4728]: I0227 10:50:19.267105 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5d7b9476fd-gdljj"] Feb 27 10:50:20 crc kubenswrapper[4728]: I0227 10:50:20.430042 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-768d449478-rt9ff" Feb 27 10:50:20 crc kubenswrapper[4728]: I0227 10:50:20.457110 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:50:20 crc kubenswrapper[4728]: I0227 10:50:20.457621 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerName="ceilometer-central-agent" containerID="cri-o://1147fea5bd875a0f5d65df8cd3e342653fc2b50663a72ccc7f2673ef6dd4bc17" gracePeriod=30 Feb 27 10:50:20 crc kubenswrapper[4728]: I0227 10:50:20.457733 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerName="sg-core" containerID="cri-o://9fa81f802da6341489496460593874668928df9ce8293595621275dc671ab4de" 
gracePeriod=30 Feb 27 10:50:20 crc kubenswrapper[4728]: I0227 10:50:20.457743 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerName="proxy-httpd" containerID="cri-o://77755e04f3436ff2b8e985276880d15d9adf11c6e6b6597433874f97e68ae2da" gracePeriod=30 Feb 27 10:50:20 crc kubenswrapper[4728]: I0227 10:50:20.457882 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerName="ceilometer-notification-agent" containerID="cri-o://9a8493936a4216b76ee7f114a82acbb29dae8761ff25687f8e678a3b0dde5c05" gracePeriod=30 Feb 27 10:50:20 crc kubenswrapper[4728]: I0227 10:50:20.489364 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-768d449478-rt9ff" Feb 27 10:50:20 crc kubenswrapper[4728]: I0227 10:50:20.581913 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7ff4498744-6lwbb"] Feb 27 10:50:20 crc kubenswrapper[4728]: I0227 10:50:20.587036 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7ff4498744-6lwbb" podUID="a4671725-3a26-4c77-ad25-01c9aa82bdf0" containerName="placement-log" containerID="cri-o://5e64b5e5dee261a73a652b58276e7104d40ee91eb4d60bb79282c5a31b47261a" gracePeriod=30 Feb 27 10:50:20 crc kubenswrapper[4728]: I0227 10:50:20.587216 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7ff4498744-6lwbb" podUID="a4671725-3a26-4c77-ad25-01c9aa82bdf0" containerName="placement-api" containerID="cri-o://d004a9842d2f8dd22564f858096272a1c8aa802c0fa92e92f4e304530b77a7f1" gracePeriod=30 Feb 27 10:50:20 crc kubenswrapper[4728]: I0227 10:50:20.742495 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca804f78-493d-459d-9061-ee9fe01d8732" 
path="/var/lib/kubelet/pods/ca804f78-493d-459d-9061-ee9fe01d8732/volumes" Feb 27 10:50:21 crc kubenswrapper[4728]: I0227 10:50:21.241466 4728 generic.go:334] "Generic (PLEG): container finished" podID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerID="77755e04f3436ff2b8e985276880d15d9adf11c6e6b6597433874f97e68ae2da" exitCode=0 Feb 27 10:50:21 crc kubenswrapper[4728]: I0227 10:50:21.241532 4728 generic.go:334] "Generic (PLEG): container finished" podID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerID="9fa81f802da6341489496460593874668928df9ce8293595621275dc671ab4de" exitCode=2 Feb 27 10:50:21 crc kubenswrapper[4728]: I0227 10:50:21.241543 4728 generic.go:334] "Generic (PLEG): container finished" podID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerID="9a8493936a4216b76ee7f114a82acbb29dae8761ff25687f8e678a3b0dde5c05" exitCode=0 Feb 27 10:50:21 crc kubenswrapper[4728]: I0227 10:50:21.241537 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38a4c39e-48ad-4e05-a4b8-162fef5472b4","Type":"ContainerDied","Data":"77755e04f3436ff2b8e985276880d15d9adf11c6e6b6597433874f97e68ae2da"} Feb 27 10:50:21 crc kubenswrapper[4728]: I0227 10:50:21.241579 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38a4c39e-48ad-4e05-a4b8-162fef5472b4","Type":"ContainerDied","Data":"9fa81f802da6341489496460593874668928df9ce8293595621275dc671ab4de"} Feb 27 10:50:21 crc kubenswrapper[4728]: I0227 10:50:21.241593 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38a4c39e-48ad-4e05-a4b8-162fef5472b4","Type":"ContainerDied","Data":"9a8493936a4216b76ee7f114a82acbb29dae8761ff25687f8e678a3b0dde5c05"} Feb 27 10:50:21 crc kubenswrapper[4728]: I0227 10:50:21.241603 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"38a4c39e-48ad-4e05-a4b8-162fef5472b4","Type":"ContainerDied","Data":"1147fea5bd875a0f5d65df8cd3e342653fc2b50663a72ccc7f2673ef6dd4bc17"} Feb 27 10:50:21 crc kubenswrapper[4728]: I0227 10:50:21.241550 4728 generic.go:334] "Generic (PLEG): container finished" podID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerID="1147fea5bd875a0f5d65df8cd3e342653fc2b50663a72ccc7f2673ef6dd4bc17" exitCode=0 Feb 27 10:50:21 crc kubenswrapper[4728]: I0227 10:50:21.244283 4728 generic.go:334] "Generic (PLEG): container finished" podID="a4671725-3a26-4c77-ad25-01c9aa82bdf0" containerID="5e64b5e5dee261a73a652b58276e7104d40ee91eb4d60bb79282c5a31b47261a" exitCode=143 Feb 27 10:50:21 crc kubenswrapper[4728]: I0227 10:50:21.244401 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ff4498744-6lwbb" event={"ID":"a4671725-3a26-4c77-ad25-01c9aa82bdf0","Type":"ContainerDied","Data":"5e64b5e5dee261a73a652b58276e7104d40ee91eb4d60bb79282c5a31b47261a"} Feb 27 10:50:22 crc kubenswrapper[4728]: I0227 10:50:22.998816 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:50:23 crc kubenswrapper[4728]: I0227 10:50:23.110216 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7d59b769b4-g5cgn"] Feb 27 10:50:23 crc kubenswrapper[4728]: I0227 10:50:23.293560 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7777f965d-45648" Feb 27 10:50:23 crc kubenswrapper[4728]: I0227 10:50:23.379385 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-84dc467799-w92cc"] Feb 27 10:50:23 crc kubenswrapper[4728]: I0227 10:50:23.380281 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-84dc467799-w92cc" podUID="b0db7498-5f3f-4550-932e-64f7d721e902" containerName="heat-engine" 
containerID="cri-o://4f0bce30e7995e8fede562a5ec4a20cf6cce0e35118324e85e7d86547b46c89b" gracePeriod=60 Feb 27 10:50:23 crc kubenswrapper[4728]: E0227 10:50:23.384695 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4f0bce30e7995e8fede562a5ec4a20cf6cce0e35118324e85e7d86547b46c89b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 27 10:50:23 crc kubenswrapper[4728]: E0227 10:50:23.401692 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4f0bce30e7995e8fede562a5ec4a20cf6cce0e35118324e85e7d86547b46c89b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 27 10:50:23 crc kubenswrapper[4728]: E0227 10:50:23.406973 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4f0bce30e7995e8fede562a5ec4a20cf6cce0e35118324e85e7d86547b46c89b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 27 10:50:23 crc kubenswrapper[4728]: E0227 10:50:23.407042 4728 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-84dc467799-w92cc" podUID="b0db7498-5f3f-4550-932e-64f7d721e902" containerName="heat-engine" Feb 27 10:50:24 crc kubenswrapper[4728]: I0227 10:50:24.326164 4728 generic.go:334] "Generic (PLEG): container finished" podID="a4671725-3a26-4c77-ad25-01c9aa82bdf0" containerID="d004a9842d2f8dd22564f858096272a1c8aa802c0fa92e92f4e304530b77a7f1" exitCode=0 Feb 27 10:50:24 crc kubenswrapper[4728]: I0227 10:50:24.326231 4728 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/placement-7ff4498744-6lwbb" event={"ID":"a4671725-3a26-4c77-ad25-01c9aa82bdf0","Type":"ContainerDied","Data":"d004a9842d2f8dd22564f858096272a1c8aa802c0fa92e92f4e304530b77a7f1"} Feb 27 10:50:26 crc kubenswrapper[4728]: E0227 10:50:26.480779 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4f0bce30e7995e8fede562a5ec4a20cf6cce0e35118324e85e7d86547b46c89b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 27 10:50:26 crc kubenswrapper[4728]: E0227 10:50:26.482858 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4f0bce30e7995e8fede562a5ec4a20cf6cce0e35118324e85e7d86547b46c89b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 27 10:50:26 crc kubenswrapper[4728]: E0227 10:50:26.484442 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4f0bce30e7995e8fede562a5ec4a20cf6cce0e35118324e85e7d86547b46c89b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 27 10:50:26 crc kubenswrapper[4728]: E0227 10:50:26.484554 4728 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-84dc467799-w92cc" podUID="b0db7498-5f3f-4550-932e-64f7d721e902" containerName="heat-engine" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.048858 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.173049 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plvmw\" (UniqueName: \"kubernetes.io/projected/cdcc3d16-3cf1-48e9-accc-c65d869be697-kube-api-access-plvmw\") pod \"cdcc3d16-3cf1-48e9-accc-c65d869be697\" (UID: \"cdcc3d16-3cf1-48e9-accc-c65d869be697\") " Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.173086 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdcc3d16-3cf1-48e9-accc-c65d869be697-config-data\") pod \"cdcc3d16-3cf1-48e9-accc-c65d869be697\" (UID: \"cdcc3d16-3cf1-48e9-accc-c65d869be697\") " Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.173187 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdcc3d16-3cf1-48e9-accc-c65d869be697-combined-ca-bundle\") pod \"cdcc3d16-3cf1-48e9-accc-c65d869be697\" (UID: \"cdcc3d16-3cf1-48e9-accc-c65d869be697\") " Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.173292 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdcc3d16-3cf1-48e9-accc-c65d869be697-config-data-custom\") pod \"cdcc3d16-3cf1-48e9-accc-c65d869be697\" (UID: \"cdcc3d16-3cf1-48e9-accc-c65d869be697\") " Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.182040 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdcc3d16-3cf1-48e9-accc-c65d869be697-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cdcc3d16-3cf1-48e9-accc-c65d869be697" (UID: "cdcc3d16-3cf1-48e9-accc-c65d869be697"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.193718 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdcc3d16-3cf1-48e9-accc-c65d869be697-kube-api-access-plvmw" (OuterVolumeSpecName: "kube-api-access-plvmw") pod "cdcc3d16-3cf1-48e9-accc-c65d869be697" (UID: "cdcc3d16-3cf1-48e9-accc-c65d869be697"). InnerVolumeSpecName "kube-api-access-plvmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.249161 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdcc3d16-3cf1-48e9-accc-c65d869be697-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdcc3d16-3cf1-48e9-accc-c65d869be697" (UID: "cdcc3d16-3cf1-48e9-accc-c65d869be697"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.282202 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plvmw\" (UniqueName: \"kubernetes.io/projected/cdcc3d16-3cf1-48e9-accc-c65d869be697-kube-api-access-plvmw\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.282237 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdcc3d16-3cf1-48e9-accc-c65d869be697-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.282246 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdcc3d16-3cf1-48e9-accc-c65d869be697-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.390580 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d59b769b4-g5cgn" 
event={"ID":"cdcc3d16-3cf1-48e9-accc-c65d869be697","Type":"ContainerDied","Data":"0e5a72f0ee394b7e29c79cd5833daaa909a2f802153ab24fe5c301a413059473"} Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.390634 4728 scope.go:117] "RemoveContainer" containerID="408d3b32b4d155259369715a3751924d39ecb393a6859500da6d601d720306b6" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.390808 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7d59b769b4-g5cgn" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.416815 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdcc3d16-3cf1-48e9-accc-c65d869be697-config-data" (OuterVolumeSpecName: "config-data") pod "cdcc3d16-3cf1-48e9-accc-c65d869be697" (UID: "cdcc3d16-3cf1-48e9-accc-c65d869be697"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.459477 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.486468 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qq4h\" (UniqueName: \"kubernetes.io/projected/38a4c39e-48ad-4e05-a4b8-162fef5472b4-kube-api-access-9qq4h\") pod \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.487386 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38a4c39e-48ad-4e05-a4b8-162fef5472b4-run-httpd\") pod \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.487513 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-combined-ca-bundle\") pod \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.487633 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-scripts\") pod \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.487783 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38a4c39e-48ad-4e05-a4b8-162fef5472b4-log-httpd\") pod \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.487835 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-config-data\") pod \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.487874 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-sg-core-conf-yaml\") pod \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\" (UID: \"38a4c39e-48ad-4e05-a4b8-162fef5472b4\") " Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.488553 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdcc3d16-3cf1-48e9-accc-c65d869be697-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.493074 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38a4c39e-48ad-4e05-a4b8-162fef5472b4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "38a4c39e-48ad-4e05-a4b8-162fef5472b4" (UID: "38a4c39e-48ad-4e05-a4b8-162fef5472b4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.494226 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38a4c39e-48ad-4e05-a4b8-162fef5472b4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "38a4c39e-48ad-4e05-a4b8-162fef5472b4" (UID: "38a4c39e-48ad-4e05-a4b8-162fef5472b4"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.499697 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38a4c39e-48ad-4e05-a4b8-162fef5472b4-kube-api-access-9qq4h" (OuterVolumeSpecName: "kube-api-access-9qq4h") pod "38a4c39e-48ad-4e05-a4b8-162fef5472b4" (UID: "38a4c39e-48ad-4e05-a4b8-162fef5472b4"). InnerVolumeSpecName "kube-api-access-9qq4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.501626 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-scripts" (OuterVolumeSpecName: "scripts") pod "38a4c39e-48ad-4e05-a4b8-162fef5472b4" (UID: "38a4c39e-48ad-4e05-a4b8-162fef5472b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.541798 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "38a4c39e-48ad-4e05-a4b8-162fef5472b4" (UID: "38a4c39e-48ad-4e05-a4b8-162fef5472b4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.577996 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.611791 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-internal-tls-certs\") pod \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.611955 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4671725-3a26-4c77-ad25-01c9aa82bdf0-logs\") pod \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.612052 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-public-tls-certs\") pod \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.612114 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vm4z\" (UniqueName: \"kubernetes.io/projected/a4671725-3a26-4c77-ad25-01c9aa82bdf0-kube-api-access-7vm4z\") pod \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.612277 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-combined-ca-bundle\") pod \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.612314 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-scripts\") pod \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.612337 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-config-data\") pod \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\" (UID: \"a4671725-3a26-4c77-ad25-01c9aa82bdf0\") " Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.618273 4728 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.618299 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qq4h\" (UniqueName: \"kubernetes.io/projected/38a4c39e-48ad-4e05-a4b8-162fef5472b4-kube-api-access-9qq4h\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.618311 4728 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38a4c39e-48ad-4e05-a4b8-162fef5472b4-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.618328 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.618337 4728 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38a4c39e-48ad-4e05-a4b8-162fef5472b4-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.623375 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a4671725-3a26-4c77-ad25-01c9aa82bdf0-logs" (OuterVolumeSpecName: "logs") pod "a4671725-3a26-4c77-ad25-01c9aa82bdf0" (UID: "a4671725-3a26-4c77-ad25-01c9aa82bdf0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.626725 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-scripts" (OuterVolumeSpecName: "scripts") pod "a4671725-3a26-4c77-ad25-01c9aa82bdf0" (UID: "a4671725-3a26-4c77-ad25-01c9aa82bdf0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.669772 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4671725-3a26-4c77-ad25-01c9aa82bdf0-kube-api-access-7vm4z" (OuterVolumeSpecName: "kube-api-access-7vm4z") pod "a4671725-3a26-4c77-ad25-01c9aa82bdf0" (UID: "a4671725-3a26-4c77-ad25-01c9aa82bdf0"). InnerVolumeSpecName "kube-api-access-7vm4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.686728 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38a4c39e-48ad-4e05-a4b8-162fef5472b4" (UID: "38a4c39e-48ad-4e05-a4b8-162fef5472b4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.732563 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.732774 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.732843 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4671725-3a26-4c77-ad25-01c9aa82bdf0-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.732897 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vm4z\" (UniqueName: \"kubernetes.io/projected/a4671725-3a26-4c77-ad25-01c9aa82bdf0-kube-api-access-7vm4z\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.749985 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4671725-3a26-4c77-ad25-01c9aa82bdf0" (UID: "a4671725-3a26-4c77-ad25-01c9aa82bdf0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.750863 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7d59b769b4-g5cgn"] Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.750914 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7d59b769b4-g5cgn"] Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.783623 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-config-data" (OuterVolumeSpecName: "config-data") pod "a4671725-3a26-4c77-ad25-01c9aa82bdf0" (UID: "a4671725-3a26-4c77-ad25-01c9aa82bdf0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.790526 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-config-data" (OuterVolumeSpecName: "config-data") pod "38a4c39e-48ad-4e05-a4b8-162fef5472b4" (UID: "38a4c39e-48ad-4e05-a4b8-162fef5472b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.835897 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a4c39e-48ad-4e05-a4b8-162fef5472b4-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.835942 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.835952 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.839075 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a4671725-3a26-4c77-ad25-01c9aa82bdf0" (UID: "a4671725-3a26-4c77-ad25-01c9aa82bdf0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.840298 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a4671725-3a26-4c77-ad25-01c9aa82bdf0" (UID: "a4671725-3a26-4c77-ad25-01c9aa82bdf0"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.937935 4728 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:28 crc kubenswrapper[4728]: I0227 10:50:28.938239 4728 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4671725-3a26-4c77-ad25-01c9aa82bdf0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.402411 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-74b5q" event={"ID":"4aaa1a60-7863-44de-8271-50bcd8fc1743","Type":"ContainerStarted","Data":"6f40f487424f71e0d0146e913f5f9501d1d177c2e72bcbc2ccbc0a4c401c579f"} Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.406250 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38a4c39e-48ad-4e05-a4b8-162fef5472b4","Type":"ContainerDied","Data":"5b3f2430eb454e1a24c694d9b4c4d06880274995f42f92580dcd3b28ee011044"} Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.406408 4728 scope.go:117] "RemoveContainer" containerID="77755e04f3436ff2b8e985276880d15d9adf11c6e6b6597433874f97e68ae2da" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.406315 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.410264 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ff4498744-6lwbb" event={"ID":"a4671725-3a26-4c77-ad25-01c9aa82bdf0","Type":"ContainerDied","Data":"78bc388bf6e3846dac3e7a0db319337129eb7aec5bfae3d1e655a02a30250826"} Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.410595 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7ff4498744-6lwbb" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.441947 4728 scope.go:117] "RemoveContainer" containerID="9fa81f802da6341489496460593874668928df9ce8293595621275dc671ab4de" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.446904 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-74b5q" podStartSLOduration=2.978656906 podStartE2EDuration="15.446880043s" podCreationTimestamp="2026-02-27 10:50:14 +0000 UTC" firstStartedPulling="2026-02-27 10:50:15.612948928 +0000 UTC m=+1435.575315034" lastFinishedPulling="2026-02-27 10:50:28.081172065 +0000 UTC m=+1448.043538171" observedRunningTime="2026-02-27 10:50:29.429462621 +0000 UTC m=+1449.391828747" watchObservedRunningTime="2026-02-27 10:50:29.446880043 +0000 UTC m=+1449.409246159" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.493563 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.497667 4728 scope.go:117] "RemoveContainer" containerID="9a8493936a4216b76ee7f114a82acbb29dae8761ff25687f8e678a3b0dde5c05" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.516452 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.530659 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7ff4498744-6lwbb"] Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.541279 4728 scope.go:117] "RemoveContainer" containerID="1147fea5bd875a0f5d65df8cd3e342653fc2b50663a72ccc7f2673ef6dd4bc17" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.545581 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7ff4498744-6lwbb"] Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.564655 4728 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Feb 27 10:50:29 crc kubenswrapper[4728]: E0227 10:50:29.565143 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac390a6-133b-434f-a853-cf7fc9d18de1" containerName="init" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565160 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac390a6-133b-434f-a853-cf7fc9d18de1" containerName="init" Feb 27 10:50:29 crc kubenswrapper[4728]: E0227 10:50:29.565179 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcc3d16-3cf1-48e9-accc-c65d869be697" containerName="heat-api" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565185 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcc3d16-3cf1-48e9-accc-c65d869be697" containerName="heat-api" Feb 27 10:50:29 crc kubenswrapper[4728]: E0227 10:50:29.565195 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerName="ceilometer-central-agent" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565202 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerName="ceilometer-central-agent" Feb 27 10:50:29 crc kubenswrapper[4728]: E0227 10:50:29.565215 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4671725-3a26-4c77-ad25-01c9aa82bdf0" containerName="placement-api" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565220 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4671725-3a26-4c77-ad25-01c9aa82bdf0" containerName="placement-api" Feb 27 10:50:29 crc kubenswrapper[4728]: E0227 10:50:29.565238 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca804f78-493d-459d-9061-ee9fe01d8732" containerName="heat-cfnapi" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565243 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca804f78-493d-459d-9061-ee9fe01d8732" containerName="heat-cfnapi" Feb 27 
10:50:29 crc kubenswrapper[4728]: E0227 10:50:29.565254 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerName="sg-core" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565259 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerName="sg-core" Feb 27 10:50:29 crc kubenswrapper[4728]: E0227 10:50:29.565275 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerName="ceilometer-notification-agent" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565282 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerName="ceilometer-notification-agent" Feb 27 10:50:29 crc kubenswrapper[4728]: E0227 10:50:29.565300 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac390a6-133b-434f-a853-cf7fc9d18de1" containerName="dnsmasq-dns" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565306 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac390a6-133b-434f-a853-cf7fc9d18de1" containerName="dnsmasq-dns" Feb 27 10:50:29 crc kubenswrapper[4728]: E0227 10:50:29.565314 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4671725-3a26-4c77-ad25-01c9aa82bdf0" containerName="placement-log" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565319 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4671725-3a26-4c77-ad25-01c9aa82bdf0" containerName="placement-log" Feb 27 10:50:29 crc kubenswrapper[4728]: E0227 10:50:29.565327 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerName="proxy-httpd" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565332 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerName="proxy-httpd" Feb 27 10:50:29 crc 
kubenswrapper[4728]: E0227 10:50:29.565342 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca804f78-493d-459d-9061-ee9fe01d8732" containerName="heat-cfnapi" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565347 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca804f78-493d-459d-9061-ee9fe01d8732" containerName="heat-cfnapi" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565543 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4671725-3a26-4c77-ad25-01c9aa82bdf0" containerName="placement-api" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565560 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcc3d16-3cf1-48e9-accc-c65d869be697" containerName="heat-api" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565569 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac390a6-133b-434f-a853-cf7fc9d18de1" containerName="dnsmasq-dns" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565581 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerName="proxy-httpd" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565593 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerName="sg-core" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565602 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerName="ceilometer-notification-agent" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565611 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" containerName="ceilometer-central-agent" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565622 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca804f78-493d-459d-9061-ee9fe01d8732" containerName="heat-cfnapi" Feb 27 
10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565630 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4671725-3a26-4c77-ad25-01c9aa82bdf0" containerName="placement-log" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565671 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca804f78-493d-459d-9061-ee9fe01d8732" containerName="heat-cfnapi" Feb 27 10:50:29 crc kubenswrapper[4728]: E0227 10:50:29.565871 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcc3d16-3cf1-48e9-accc-c65d869be697" containerName="heat-api" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.565879 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcc3d16-3cf1-48e9-accc-c65d869be697" containerName="heat-api" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.566084 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcc3d16-3cf1-48e9-accc-c65d869be697" containerName="heat-api" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.568817 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.571305 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.577188 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.577479 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.578534 4728 scope.go:117] "RemoveContainer" containerID="d004a9842d2f8dd22564f858096272a1c8aa802c0fa92e92f4e304530b77a7f1" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.618242 4728 scope.go:117] "RemoveContainer" containerID="5e64b5e5dee261a73a652b58276e7104d40ee91eb4d60bb79282c5a31b47261a" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.656730 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-config-data\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.657022 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-scripts\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.657171 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555793ce-76da-4384-8b40-4133438d1bec-log-httpd\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 
crc kubenswrapper[4728]: I0227 10:50:29.657350 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx5z6\" (UniqueName: \"kubernetes.io/projected/555793ce-76da-4384-8b40-4133438d1bec-kube-api-access-cx5z6\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.657495 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.657593 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.657683 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555793ce-76da-4384-8b40-4133438d1bec-run-httpd\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.759556 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555793ce-76da-4384-8b40-4133438d1bec-run-httpd\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.759615 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-config-data\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.759648 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-scripts\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.759746 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555793ce-76da-4384-8b40-4133438d1bec-log-httpd\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.759806 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx5z6\" (UniqueName: \"kubernetes.io/projected/555793ce-76da-4384-8b40-4133438d1bec-kube-api-access-cx5z6\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.759892 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.759928 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 
crc kubenswrapper[4728]: I0227 10:50:29.760327 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555793ce-76da-4384-8b40-4133438d1bec-run-httpd\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.761204 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555793ce-76da-4384-8b40-4133438d1bec-log-httpd\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.766889 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.767050 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-config-data\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.768187 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-scripts\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.768779 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.779942 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx5z6\" (UniqueName: \"kubernetes.io/projected/555793ce-76da-4384-8b40-4133438d1bec-kube-api-access-cx5z6\") pod \"ceilometer-0\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " pod="openstack/ceilometer-0" Feb 27 10:50:29 crc kubenswrapper[4728]: I0227 10:50:29.891657 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:50:30 crc kubenswrapper[4728]: I0227 10:50:30.428537 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:50:30 crc kubenswrapper[4728]: I0227 10:50:30.737591 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38a4c39e-48ad-4e05-a4b8-162fef5472b4" path="/var/lib/kubelet/pods/38a4c39e-48ad-4e05-a4b8-162fef5472b4/volumes" Feb 27 10:50:30 crc kubenswrapper[4728]: I0227 10:50:30.738375 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4671725-3a26-4c77-ad25-01c9aa82bdf0" path="/var/lib/kubelet/pods/a4671725-3a26-4c77-ad25-01c9aa82bdf0/volumes" Feb 27 10:50:30 crc kubenswrapper[4728]: I0227 10:50:30.738948 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdcc3d16-3cf1-48e9-accc-c65d869be697" path="/var/lib/kubelet/pods/cdcc3d16-3cf1-48e9-accc-c65d869be697/volumes" Feb 27 10:50:31 crc kubenswrapper[4728]: I0227 10:50:31.440349 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"555793ce-76da-4384-8b40-4133438d1bec","Type":"ContainerStarted","Data":"0715becab8c00893ac70c9215a4b8c871c38975372c5d09fea90bc6ad8464c0a"} Feb 27 10:50:31 crc kubenswrapper[4728]: I0227 10:50:31.440721 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"555793ce-76da-4384-8b40-4133438d1bec","Type":"ContainerStarted","Data":"053d4e2fbdf746aa13422465227c6a3d739f5764fc21a048fb697134ecc1768f"} Feb 27 10:50:32 crc kubenswrapper[4728]: I0227 10:50:32.455188 4728 generic.go:334] "Generic (PLEG): container finished" podID="b0db7498-5f3f-4550-932e-64f7d721e902" containerID="4f0bce30e7995e8fede562a5ec4a20cf6cce0e35118324e85e7d86547b46c89b" exitCode=0 Feb 27 10:50:32 crc kubenswrapper[4728]: I0227 10:50:32.456019 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-84dc467799-w92cc" event={"ID":"b0db7498-5f3f-4550-932e-64f7d721e902","Type":"ContainerDied","Data":"4f0bce30e7995e8fede562a5ec4a20cf6cce0e35118324e85e7d86547b46c89b"} Feb 27 10:50:32 crc kubenswrapper[4728]: I0227 10:50:32.456054 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-84dc467799-w92cc" event={"ID":"b0db7498-5f3f-4550-932e-64f7d721e902","Type":"ContainerDied","Data":"f8b3f70a6aa6b2b0a2e9d1b715988432d0ce10367376fbe638b52bd957d7b7c7"} Feb 27 10:50:32 crc kubenswrapper[4728]: I0227 10:50:32.456070 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8b3f70a6aa6b2b0a2e9d1b715988432d0ce10367376fbe638b52bd957d7b7c7" Feb 27 10:50:32 crc kubenswrapper[4728]: I0227 10:50:32.467803 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"555793ce-76da-4384-8b40-4133438d1bec","Type":"ContainerStarted","Data":"269d85f72aa432f3ec61376d13460d5d06bfd1dbe10308c8c9a03ed5857c8529"} Feb 27 10:50:32 crc kubenswrapper[4728]: I0227 10:50:32.553713 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-84dc467799-w92cc" Feb 27 10:50:32 crc kubenswrapper[4728]: I0227 10:50:32.630055 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0db7498-5f3f-4550-932e-64f7d721e902-combined-ca-bundle\") pod \"b0db7498-5f3f-4550-932e-64f7d721e902\" (UID: \"b0db7498-5f3f-4550-932e-64f7d721e902\") " Feb 27 10:50:32 crc kubenswrapper[4728]: I0227 10:50:32.630389 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdj7s\" (UniqueName: \"kubernetes.io/projected/b0db7498-5f3f-4550-932e-64f7d721e902-kube-api-access-mdj7s\") pod \"b0db7498-5f3f-4550-932e-64f7d721e902\" (UID: \"b0db7498-5f3f-4550-932e-64f7d721e902\") " Feb 27 10:50:32 crc kubenswrapper[4728]: I0227 10:50:32.630487 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0db7498-5f3f-4550-932e-64f7d721e902-config-data\") pod \"b0db7498-5f3f-4550-932e-64f7d721e902\" (UID: \"b0db7498-5f3f-4550-932e-64f7d721e902\") " Feb 27 10:50:32 crc kubenswrapper[4728]: I0227 10:50:32.630646 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0db7498-5f3f-4550-932e-64f7d721e902-config-data-custom\") pod \"b0db7498-5f3f-4550-932e-64f7d721e902\" (UID: \"b0db7498-5f3f-4550-932e-64f7d721e902\") " Feb 27 10:50:32 crc kubenswrapper[4728]: I0227 10:50:32.640392 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0db7498-5f3f-4550-932e-64f7d721e902-kube-api-access-mdj7s" (OuterVolumeSpecName: "kube-api-access-mdj7s") pod "b0db7498-5f3f-4550-932e-64f7d721e902" (UID: "b0db7498-5f3f-4550-932e-64f7d721e902"). InnerVolumeSpecName "kube-api-access-mdj7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:32 crc kubenswrapper[4728]: I0227 10:50:32.648046 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0db7498-5f3f-4550-932e-64f7d721e902-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b0db7498-5f3f-4550-932e-64f7d721e902" (UID: "b0db7498-5f3f-4550-932e-64f7d721e902"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:32 crc kubenswrapper[4728]: I0227 10:50:32.663614 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0db7498-5f3f-4550-932e-64f7d721e902-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0db7498-5f3f-4550-932e-64f7d721e902" (UID: "b0db7498-5f3f-4550-932e-64f7d721e902"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:32 crc kubenswrapper[4728]: I0227 10:50:32.716632 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0db7498-5f3f-4550-932e-64f7d721e902-config-data" (OuterVolumeSpecName: "config-data") pod "b0db7498-5f3f-4550-932e-64f7d721e902" (UID: "b0db7498-5f3f-4550-932e-64f7d721e902"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:32 crc kubenswrapper[4728]: I0227 10:50:32.738610 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0db7498-5f3f-4550-932e-64f7d721e902-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:32 crc kubenswrapper[4728]: I0227 10:50:32.738637 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0db7498-5f3f-4550-932e-64f7d721e902-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:32 crc kubenswrapper[4728]: I0227 10:50:32.738654 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdj7s\" (UniqueName: \"kubernetes.io/projected/b0db7498-5f3f-4550-932e-64f7d721e902-kube-api-access-mdj7s\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:32 crc kubenswrapper[4728]: I0227 10:50:32.738667 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0db7498-5f3f-4550-932e-64f7d721e902-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:33 crc kubenswrapper[4728]: I0227 10:50:33.480242 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"555793ce-76da-4384-8b40-4133438d1bec","Type":"ContainerStarted","Data":"b3197829f9f03b9abde5dc0fcfb961d95cba9a6be02e975621b5e11dc5c65edb"} Feb 27 10:50:33 crc kubenswrapper[4728]: I0227 10:50:33.481169 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-84dc467799-w92cc" Feb 27 10:50:33 crc kubenswrapper[4728]: I0227 10:50:33.511929 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-84dc467799-w92cc"] Feb 27 10:50:33 crc kubenswrapper[4728]: I0227 10:50:33.530410 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-84dc467799-w92cc"] Feb 27 10:50:34 crc kubenswrapper[4728]: I0227 10:50:34.017550 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:50:34 crc kubenswrapper[4728]: I0227 10:50:34.018163 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4489bcc6-1de2-45f0-993d-67bd941e697c" containerName="glance-log" containerID="cri-o://06ac0b483b2a9e06753561a7639abef68a0cd6228c20931c8119eba80490f3b5" gracePeriod=30 Feb 27 10:50:34 crc kubenswrapper[4728]: I0227 10:50:34.018276 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4489bcc6-1de2-45f0-993d-67bd941e697c" containerName="glance-httpd" containerID="cri-o://3e5a486e6493ed0a69a24a8111f106acb12103b60d7d09f35b9f7062ba41f0d1" gracePeriod=30 Feb 27 10:50:34 crc kubenswrapper[4728]: I0227 10:50:34.493185 4728 generic.go:334] "Generic (PLEG): container finished" podID="4489bcc6-1de2-45f0-993d-67bd941e697c" containerID="06ac0b483b2a9e06753561a7639abef68a0cd6228c20931c8119eba80490f3b5" exitCode=143 Feb 27 10:50:34 crc kubenswrapper[4728]: I0227 10:50:34.493237 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4489bcc6-1de2-45f0-993d-67bd941e697c","Type":"ContainerDied","Data":"06ac0b483b2a9e06753561a7639abef68a0cd6228c20931c8119eba80490f3b5"} Feb 27 10:50:34 crc kubenswrapper[4728]: I0227 10:50:34.750580 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b0db7498-5f3f-4550-932e-64f7d721e902" path="/var/lib/kubelet/pods/b0db7498-5f3f-4550-932e-64f7d721e902/volumes" Feb 27 10:50:35 crc kubenswrapper[4728]: I0227 10:50:35.515454 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"555793ce-76da-4384-8b40-4133438d1bec","Type":"ContainerStarted","Data":"dd86b632d9818237c5ad8cfe3a03f71d330a4e6cb0899a1e566bb8b267c6b90b"} Feb 27 10:50:35 crc kubenswrapper[4728]: I0227 10:50:35.516712 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 10:50:35 crc kubenswrapper[4728]: I0227 10:50:35.539868 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.332648514 podStartE2EDuration="6.539851816s" podCreationTimestamp="2026-02-27 10:50:29 +0000 UTC" firstStartedPulling="2026-02-27 10:50:30.430211576 +0000 UTC m=+1450.392577682" lastFinishedPulling="2026-02-27 10:50:34.637414878 +0000 UTC m=+1454.599780984" observedRunningTime="2026-02-27 10:50:35.539030063 +0000 UTC m=+1455.501396169" watchObservedRunningTime="2026-02-27 10:50:35.539851816 +0000 UTC m=+1455.502217922" Feb 27 10:50:35 crc kubenswrapper[4728]: I0227 10:50:35.577411 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.174120 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="4489bcc6-1de2-45f0-993d-67bd941e697c" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.197:9292/healthcheck\": read tcp 10.217.0.2:52334->10.217.0.197:9292: read: connection reset by peer" Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.174117 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="4489bcc6-1de2-45f0-993d-67bd941e697c" containerName="glance-httpd" probeResult="failure" 
output="Get \"https://10.217.0.197:9292/healthcheck\": read tcp 10.217.0.2:52328->10.217.0.197:9292: read: connection reset by peer" Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.541854 4728 generic.go:334] "Generic (PLEG): container finished" podID="4489bcc6-1de2-45f0-993d-67bd941e697c" containerID="3e5a486e6493ed0a69a24a8111f106acb12103b60d7d09f35b9f7062ba41f0d1" exitCode=0 Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.542289 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="555793ce-76da-4384-8b40-4133438d1bec" containerName="ceilometer-central-agent" containerID="cri-o://0715becab8c00893ac70c9215a4b8c871c38975372c5d09fea90bc6ad8464c0a" gracePeriod=30 Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.542580 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4489bcc6-1de2-45f0-993d-67bd941e697c","Type":"ContainerDied","Data":"3e5a486e6493ed0a69a24a8111f106acb12103b60d7d09f35b9f7062ba41f0d1"} Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.542971 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="555793ce-76da-4384-8b40-4133438d1bec" containerName="proxy-httpd" containerID="cri-o://dd86b632d9818237c5ad8cfe3a03f71d330a4e6cb0899a1e566bb8b267c6b90b" gracePeriod=30 Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.543025 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="555793ce-76da-4384-8b40-4133438d1bec" containerName="sg-core" containerID="cri-o://b3197829f9f03b9abde5dc0fcfb961d95cba9a6be02e975621b5e11dc5c65edb" gracePeriod=30 Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.543057 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="555793ce-76da-4384-8b40-4133438d1bec" containerName="ceilometer-notification-agent" 
containerID="cri-o://269d85f72aa432f3ec61376d13460d5d06bfd1dbe10308c8c9a03ed5857c8529" gracePeriod=30 Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.868406 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.964172 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-config-data\") pod \"4489bcc6-1de2-45f0-993d-67bd941e697c\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.964480 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4489bcc6-1de2-45f0-993d-67bd941e697c-logs\") pod \"4489bcc6-1de2-45f0-993d-67bd941e697c\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.964609 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv78l\" (UniqueName: \"kubernetes.io/projected/4489bcc6-1de2-45f0-993d-67bd941e697c-kube-api-access-qv78l\") pod \"4489bcc6-1de2-45f0-993d-67bd941e697c\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.964709 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4489bcc6-1de2-45f0-993d-67bd941e697c-httpd-run\") pod \"4489bcc6-1de2-45f0-993d-67bd941e697c\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.964736 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-internal-tls-certs\") pod \"4489bcc6-1de2-45f0-993d-67bd941e697c\" (UID: 
\"4489bcc6-1de2-45f0-993d-67bd941e697c\") " Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.964771 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-combined-ca-bundle\") pod \"4489bcc6-1de2-45f0-993d-67bd941e697c\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.964961 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-scripts\") pod \"4489bcc6-1de2-45f0-993d-67bd941e697c\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.965863 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\") pod \"4489bcc6-1de2-45f0-993d-67bd941e697c\" (UID: \"4489bcc6-1de2-45f0-993d-67bd941e697c\") " Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.968546 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4489bcc6-1de2-45f0-993d-67bd941e697c-logs" (OuterVolumeSpecName: "logs") pod "4489bcc6-1de2-45f0-993d-67bd941e697c" (UID: "4489bcc6-1de2-45f0-993d-67bd941e697c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.969783 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4489bcc6-1de2-45f0-993d-67bd941e697c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4489bcc6-1de2-45f0-993d-67bd941e697c" (UID: "4489bcc6-1de2-45f0-993d-67bd941e697c"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.973675 4728 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4489bcc6-1de2-45f0-993d-67bd941e697c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.973708 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4489bcc6-1de2-45f0-993d-67bd941e697c-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.977065 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-scripts" (OuterVolumeSpecName: "scripts") pod "4489bcc6-1de2-45f0-993d-67bd941e697c" (UID: "4489bcc6-1de2-45f0-993d-67bd941e697c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:37 crc kubenswrapper[4728]: I0227 10:50:37.994725 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4489bcc6-1de2-45f0-993d-67bd941e697c-kube-api-access-qv78l" (OuterVolumeSpecName: "kube-api-access-qv78l") pod "4489bcc6-1de2-45f0-993d-67bd941e697c" (UID: "4489bcc6-1de2-45f0-993d-67bd941e697c"). InnerVolumeSpecName "kube-api-access-qv78l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.026835 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec" (OuterVolumeSpecName: "glance") pod "4489bcc6-1de2-45f0-993d-67bd941e697c" (UID: "4489bcc6-1de2-45f0-993d-67bd941e697c"). InnerVolumeSpecName "pvc-35b05381-be13-4917-9b03-b3cf66c9fdec". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.059784 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4489bcc6-1de2-45f0-993d-67bd941e697c" (UID: "4489bcc6-1de2-45f0-993d-67bd941e697c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.077472 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.077532 4728 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\") on node \"crc\" " Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.077545 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv78l\" (UniqueName: \"kubernetes.io/projected/4489bcc6-1de2-45f0-993d-67bd941e697c-kube-api-access-qv78l\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.077556 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.096769 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4489bcc6-1de2-45f0-993d-67bd941e697c" (UID: "4489bcc6-1de2-45f0-993d-67bd941e697c"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.104250 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-config-data" (OuterVolumeSpecName: "config-data") pod "4489bcc6-1de2-45f0-993d-67bd941e697c" (UID: "4489bcc6-1de2-45f0-993d-67bd941e697c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.130465 4728 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.130669 4728 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-35b05381-be13-4917-9b03-b3cf66c9fdec" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec") on node "crc" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.179126 4728 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.179163 4728 reconciler_common.go:293] "Volume detached for volume \"pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.179175 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4489bcc6-1de2-45f0-993d-67bd941e697c-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:38 crc kubenswrapper[4728]: E0227 10:50:38.202971 4728 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod555793ce_76da_4384_8b40_4133438d1bec.slice/crio-conmon-269d85f72aa432f3ec61376d13460d5d06bfd1dbe10308c8c9a03ed5857c8529.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod555793ce_76da_4384_8b40_4133438d1bec.slice/crio-conmon-dd86b632d9818237c5ad8cfe3a03f71d330a4e6cb0899a1e566bb8b267c6b90b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod555793ce_76da_4384_8b40_4133438d1bec.slice/crio-269d85f72aa432f3ec61376d13460d5d06bfd1dbe10308c8c9a03ed5857c8529.scope\": RecentStats: unable to find data in memory cache]" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.555278 4728 generic.go:334] "Generic (PLEG): container finished" podID="555793ce-76da-4384-8b40-4133438d1bec" containerID="dd86b632d9818237c5ad8cfe3a03f71d330a4e6cb0899a1e566bb8b267c6b90b" exitCode=0 Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.555646 4728 generic.go:334] "Generic (PLEG): container finished" podID="555793ce-76da-4384-8b40-4133438d1bec" containerID="b3197829f9f03b9abde5dc0fcfb961d95cba9a6be02e975621b5e11dc5c65edb" exitCode=2 Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.555657 4728 generic.go:334] "Generic (PLEG): container finished" podID="555793ce-76da-4384-8b40-4133438d1bec" containerID="269d85f72aa432f3ec61376d13460d5d06bfd1dbe10308c8c9a03ed5857c8529" exitCode=0 Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.555343 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"555793ce-76da-4384-8b40-4133438d1bec","Type":"ContainerDied","Data":"dd86b632d9818237c5ad8cfe3a03f71d330a4e6cb0899a1e566bb8b267c6b90b"} Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.555721 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"555793ce-76da-4384-8b40-4133438d1bec","Type":"ContainerDied","Data":"b3197829f9f03b9abde5dc0fcfb961d95cba9a6be02e975621b5e11dc5c65edb"} Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.555738 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"555793ce-76da-4384-8b40-4133438d1bec","Type":"ContainerDied","Data":"269d85f72aa432f3ec61376d13460d5d06bfd1dbe10308c8c9a03ed5857c8529"} Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.558388 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4489bcc6-1de2-45f0-993d-67bd941e697c","Type":"ContainerDied","Data":"07d80261b3ede7b07a58551b40861c499ed3cc256ebce237279028efc99feb9c"} Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.558442 4728 scope.go:117] "RemoveContainer" containerID="3e5a486e6493ed0a69a24a8111f106acb12103b60d7d09f35b9f7062ba41f0d1" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.558465 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.585057 4728 scope.go:117] "RemoveContainer" containerID="06ac0b483b2a9e06753561a7639abef68a0cd6228c20931c8119eba80490f3b5" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.608332 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.653299 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.684578 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:50:38 crc kubenswrapper[4728]: E0227 10:50:38.685177 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4489bcc6-1de2-45f0-993d-67bd941e697c" containerName="glance-httpd" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.685197 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4489bcc6-1de2-45f0-993d-67bd941e697c" containerName="glance-httpd" Feb 27 10:50:38 crc kubenswrapper[4728]: E0227 10:50:38.685241 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0db7498-5f3f-4550-932e-64f7d721e902" containerName="heat-engine" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.685250 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0db7498-5f3f-4550-932e-64f7d721e902" containerName="heat-engine" Feb 27 10:50:38 crc kubenswrapper[4728]: E0227 10:50:38.685277 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4489bcc6-1de2-45f0-993d-67bd941e697c" containerName="glance-log" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.685285 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4489bcc6-1de2-45f0-993d-67bd941e697c" containerName="glance-log" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.685564 4728 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4489bcc6-1de2-45f0-993d-67bd941e697c" containerName="glance-httpd" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.685590 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0db7498-5f3f-4550-932e-64f7d721e902" containerName="heat-engine" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.685803 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4489bcc6-1de2-45f0-993d-67bd941e697c" containerName="glance-log" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.687308 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.699221 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.701220 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.701462 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.743593 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4489bcc6-1de2-45f0-993d-67bd941e697c" path="/var/lib/kubelet/pods/4489bcc6-1de2-45f0-993d-67bd941e697c/volumes" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.795994 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/341ec042-9200-431a-b264-4a43228e1010-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.796069 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/341ec042-9200-431a-b264-4a43228e1010-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.796115 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.796166 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb7hk\" (UniqueName: \"kubernetes.io/projected/341ec042-9200-431a-b264-4a43228e1010-kube-api-access-lb7hk\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.796232 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/341ec042-9200-431a-b264-4a43228e1010-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.796466 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/341ec042-9200-431a-b264-4a43228e1010-logs\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 
10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.797056 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/341ec042-9200-431a-b264-4a43228e1010-scripts\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.797212 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/341ec042-9200-431a-b264-4a43228e1010-config-data\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.899376 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/341ec042-9200-431a-b264-4a43228e1010-logs\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.899491 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/341ec042-9200-431a-b264-4a43228e1010-scripts\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.899553 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/341ec042-9200-431a-b264-4a43228e1010-config-data\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.899664 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/341ec042-9200-431a-b264-4a43228e1010-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.899721 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/341ec042-9200-431a-b264-4a43228e1010-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.899773 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.899803 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb7hk\" (UniqueName: \"kubernetes.io/projected/341ec042-9200-431a-b264-4a43228e1010-kube-api-access-lb7hk\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.899837 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/341ec042-9200-431a-b264-4a43228e1010-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 
10:50:38.900448 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/341ec042-9200-431a-b264-4a43228e1010-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.900994 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/341ec042-9200-431a-b264-4a43228e1010-logs\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.906202 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/341ec042-9200-431a-b264-4a43228e1010-scripts\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.906585 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/341ec042-9200-431a-b264-4a43228e1010-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.906960 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/341ec042-9200-431a-b264-4a43228e1010-config-data\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.907217 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.907249 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b9ab4151576111345e22bb0de839169f32b3e09bf40bf7b14be12c506cea8a77/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.918293 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/341ec042-9200-431a-b264-4a43228e1010-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.930718 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb7hk\" (UniqueName: \"kubernetes.io/projected/341ec042-9200-431a-b264-4a43228e1010-kube-api-access-lb7hk\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:38 crc kubenswrapper[4728]: I0227 10:50:38.986388 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35b05381-be13-4917-9b03-b3cf66c9fdec\") pod \"glance-default-internal-api-0\" (UID: \"341ec042-9200-431a-b264-4a43228e1010\") " pod="openstack/glance-default-internal-api-0" Feb 27 10:50:39 crc kubenswrapper[4728]: I0227 10:50:39.020895 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 10:50:39 crc kubenswrapper[4728]: I0227 10:50:39.724654 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 10:50:39 crc kubenswrapper[4728]: W0227 10:50:39.732016 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod341ec042_9200_431a_b264_4a43228e1010.slice/crio-68bec05b8d574f1ad936ff9454c111d9a1f37cc5dd7bb9f06ae2038e08f004fc WatchSource:0}: Error finding container 68bec05b8d574f1ad936ff9454c111d9a1f37cc5dd7bb9f06ae2038e08f004fc: Status 404 returned error can't find the container with id 68bec05b8d574f1ad936ff9454c111d9a1f37cc5dd7bb9f06ae2038e08f004fc Feb 27 10:50:40 crc kubenswrapper[4728]: I0227 10:50:40.595073 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"341ec042-9200-431a-b264-4a43228e1010","Type":"ContainerStarted","Data":"6ad72b7ab82d7db5f9a4801fbb0f2709b59542e6160f991bc9909f53c0d626de"} Feb 27 10:50:40 crc kubenswrapper[4728]: I0227 10:50:40.595391 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"341ec042-9200-431a-b264-4a43228e1010","Type":"ContainerStarted","Data":"68bec05b8d574f1ad936ff9454c111d9a1f37cc5dd7bb9f06ae2038e08f004fc"} Feb 27 10:50:40 crc kubenswrapper[4728]: I0227 10:50:40.640947 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:50:40 crc kubenswrapper[4728]: I0227 10:50:40.641226 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" containerName="glance-log" containerID="cri-o://ab4d5066d00303bc3e83ffd52e4b2f80c96fb80255ced422c66039e161a36d37" gracePeriod=30 Feb 27 10:50:40 crc kubenswrapper[4728]: I0227 10:50:40.641643 
4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" containerName="glance-httpd" containerID="cri-o://a1ba93c922074640e7fa96eddc3052a948ef0eb1207edaf20ab7ea540b868666" gracePeriod=30 Feb 27 10:50:41 crc kubenswrapper[4728]: I0227 10:50:41.608151 4728 generic.go:334] "Generic (PLEG): container finished" podID="afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" containerID="ab4d5066d00303bc3e83ffd52e4b2f80c96fb80255ced422c66039e161a36d37" exitCode=143 Feb 27 10:50:41 crc kubenswrapper[4728]: I0227 10:50:41.608226 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0","Type":"ContainerDied","Data":"ab4d5066d00303bc3e83ffd52e4b2f80c96fb80255ced422c66039e161a36d37"} Feb 27 10:50:41 crc kubenswrapper[4728]: I0227 10:50:41.611687 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"341ec042-9200-431a-b264-4a43228e1010","Type":"ContainerStarted","Data":"e66254ddd35a70f6c66630299bceedfedee698a302106f039a1ec147ea754b1c"} Feb 27 10:50:41 crc kubenswrapper[4728]: I0227 10:50:41.658538 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.6585168660000003 podStartE2EDuration="3.658516866s" podCreationTimestamp="2026-02-27 10:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:50:41.64281044 +0000 UTC m=+1461.605176546" watchObservedRunningTime="2026-02-27 10:50:41.658516866 +0000 UTC m=+1461.620882972" Feb 27 10:50:42 crc kubenswrapper[4728]: I0227 10:50:42.622463 4728 generic.go:334] "Generic (PLEG): container finished" podID="4aaa1a60-7863-44de-8271-50bcd8fc1743" 
containerID="6f40f487424f71e0d0146e913f5f9501d1d177c2e72bcbc2ccbc0a4c401c579f" exitCode=0 Feb 27 10:50:42 crc kubenswrapper[4728]: I0227 10:50:42.622538 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-74b5q" event={"ID":"4aaa1a60-7863-44de-8271-50bcd8fc1743","Type":"ContainerDied","Data":"6f40f487424f71e0d0146e913f5f9501d1d177c2e72bcbc2ccbc0a4c401c579f"} Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.120997 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-74b5q" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.270746 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aaa1a60-7863-44de-8271-50bcd8fc1743-scripts\") pod \"4aaa1a60-7863-44de-8271-50bcd8fc1743\" (UID: \"4aaa1a60-7863-44de-8271-50bcd8fc1743\") " Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.270853 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aaa1a60-7863-44de-8271-50bcd8fc1743-combined-ca-bundle\") pod \"4aaa1a60-7863-44de-8271-50bcd8fc1743\" (UID: \"4aaa1a60-7863-44de-8271-50bcd8fc1743\") " Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.270904 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aaa1a60-7863-44de-8271-50bcd8fc1743-config-data\") pod \"4aaa1a60-7863-44de-8271-50bcd8fc1743\" (UID: \"4aaa1a60-7863-44de-8271-50bcd8fc1743\") " Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.270974 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bhms\" (UniqueName: \"kubernetes.io/projected/4aaa1a60-7863-44de-8271-50bcd8fc1743-kube-api-access-6bhms\") pod \"4aaa1a60-7863-44de-8271-50bcd8fc1743\" (UID: 
\"4aaa1a60-7863-44de-8271-50bcd8fc1743\") " Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.288040 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aaa1a60-7863-44de-8271-50bcd8fc1743-scripts" (OuterVolumeSpecName: "scripts") pod "4aaa1a60-7863-44de-8271-50bcd8fc1743" (UID: "4aaa1a60-7863-44de-8271-50bcd8fc1743"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.305552 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aaa1a60-7863-44de-8271-50bcd8fc1743-kube-api-access-6bhms" (OuterVolumeSpecName: "kube-api-access-6bhms") pod "4aaa1a60-7863-44de-8271-50bcd8fc1743" (UID: "4aaa1a60-7863-44de-8271-50bcd8fc1743"). InnerVolumeSpecName "kube-api-access-6bhms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.346111 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aaa1a60-7863-44de-8271-50bcd8fc1743-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4aaa1a60-7863-44de-8271-50bcd8fc1743" (UID: "4aaa1a60-7863-44de-8271-50bcd8fc1743"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.387917 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aaa1a60-7863-44de-8271-50bcd8fc1743-config-data" (OuterVolumeSpecName: "config-data") pod "4aaa1a60-7863-44de-8271-50bcd8fc1743" (UID: "4aaa1a60-7863-44de-8271-50bcd8fc1743"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.390781 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aaa1a60-7863-44de-8271-50bcd8fc1743-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.390801 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aaa1a60-7863-44de-8271-50bcd8fc1743-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.390812 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aaa1a60-7863-44de-8271-50bcd8fc1743-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.390821 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bhms\" (UniqueName: \"kubernetes.io/projected/4aaa1a60-7863-44de-8271-50bcd8fc1743-kube-api-access-6bhms\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.581421 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.644170 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-74b5q" event={"ID":"4aaa1a60-7863-44de-8271-50bcd8fc1743","Type":"ContainerDied","Data":"e3dec06fbed6b82e0789ee1235bd580c8e711d555d426458deb11086de593275"} Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.644207 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3dec06fbed6b82e0789ee1235bd580c8e711d555d426458deb11086de593275" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.644260 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-74b5q" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.656088 4728 generic.go:334] "Generic (PLEG): container finished" podID="afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" containerID="a1ba93c922074640e7fa96eddc3052a948ef0eb1207edaf20ab7ea540b868666" exitCode=0 Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.656132 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0","Type":"ContainerDied","Data":"a1ba93c922074640e7fa96eddc3052a948ef0eb1207edaf20ab7ea540b868666"} Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.656162 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0","Type":"ContainerDied","Data":"ff529f21983fdc9638da435b921b17a0f6ffc0212c182cea404eb29a199aec79"} Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.656179 4728 scope.go:117] "RemoveContainer" containerID="a1ba93c922074640e7fa96eddc3052a948ef0eb1207edaf20ab7ea540b868666" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.656307 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.683690 4728 scope.go:117] "RemoveContainer" containerID="ab4d5066d00303bc3e83ffd52e4b2f80c96fb80255ced422c66039e161a36d37" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.698448 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-scripts\") pod \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.698539 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-public-tls-certs\") pod \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.698565 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-httpd-run\") pod \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.698613 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-config-data\") pod \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.698695 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-logs\") pod \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " Feb 27 10:50:44 crc 
kubenswrapper[4728]: I0227 10:50:44.698850 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-combined-ca-bundle\") pod \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.699450 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\") pod \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.699486 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hfwv\" (UniqueName: \"kubernetes.io/projected/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-kube-api-access-2hfwv\") pod \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\" (UID: \"afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0\") " Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.701927 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-logs" (OuterVolumeSpecName: "logs") pod "afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" (UID: "afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.702092 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" (UID: "afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.707638 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-kube-api-access-2hfwv" (OuterVolumeSpecName: "kube-api-access-2hfwv") pod "afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" (UID: "afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0"). InnerVolumeSpecName "kube-api-access-2hfwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.713098 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-scripts" (OuterVolumeSpecName: "scripts") pod "afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" (UID: "afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.748192 4728 scope.go:117] "RemoveContainer" containerID="a1ba93c922074640e7fa96eddc3052a948ef0eb1207edaf20ab7ea540b868666" Feb 27 10:50:44 crc kubenswrapper[4728]: E0227 10:50:44.752067 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1ba93c922074640e7fa96eddc3052a948ef0eb1207edaf20ab7ea540b868666\": container with ID starting with a1ba93c922074640e7fa96eddc3052a948ef0eb1207edaf20ab7ea540b868666 not found: ID does not exist" containerID="a1ba93c922074640e7fa96eddc3052a948ef0eb1207edaf20ab7ea540b868666" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.752120 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1ba93c922074640e7fa96eddc3052a948ef0eb1207edaf20ab7ea540b868666"} err="failed to get container status \"a1ba93c922074640e7fa96eddc3052a948ef0eb1207edaf20ab7ea540b868666\": rpc error: code = NotFound desc = could not find container 
\"a1ba93c922074640e7fa96eddc3052a948ef0eb1207edaf20ab7ea540b868666\": container with ID starting with a1ba93c922074640e7fa96eddc3052a948ef0eb1207edaf20ab7ea540b868666 not found: ID does not exist" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.752144 4728 scope.go:117] "RemoveContainer" containerID="ab4d5066d00303bc3e83ffd52e4b2f80c96fb80255ced422c66039e161a36d37" Feb 27 10:50:44 crc kubenswrapper[4728]: E0227 10:50:44.752482 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab4d5066d00303bc3e83ffd52e4b2f80c96fb80255ced422c66039e161a36d37\": container with ID starting with ab4d5066d00303bc3e83ffd52e4b2f80c96fb80255ced422c66039e161a36d37 not found: ID does not exist" containerID="ab4d5066d00303bc3e83ffd52e4b2f80c96fb80255ced422c66039e161a36d37" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.752524 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4d5066d00303bc3e83ffd52e4b2f80c96fb80255ced422c66039e161a36d37"} err="failed to get container status \"ab4d5066d00303bc3e83ffd52e4b2f80c96fb80255ced422c66039e161a36d37\": rpc error: code = NotFound desc = could not find container \"ab4d5066d00303bc3e83ffd52e4b2f80c96fb80255ced422c66039e161a36d37\": container with ID starting with ab4d5066d00303bc3e83ffd52e4b2f80c96fb80255ced422c66039e161a36d37 not found: ID does not exist" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.780872 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 10:50:44 crc kubenswrapper[4728]: E0227 10:50:44.781379 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aaa1a60-7863-44de-8271-50bcd8fc1743" containerName="nova-cell0-conductor-db-sync" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.781393 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aaa1a60-7863-44de-8271-50bcd8fc1743" 
containerName="nova-cell0-conductor-db-sync" Feb 27 10:50:44 crc kubenswrapper[4728]: E0227 10:50:44.781420 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" containerName="glance-httpd" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.781426 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" containerName="glance-httpd" Feb 27 10:50:44 crc kubenswrapper[4728]: E0227 10:50:44.781445 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" containerName="glance-log" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.781451 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" containerName="glance-log" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.781677 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" containerName="glance-httpd" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.781701 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" containerName="glance-log" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.781717 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aaa1a60-7863-44de-8271-50bcd8fc1743" containerName="nova-cell0-conductor-db-sync" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.782474 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.786456 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nzv8g" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.786773 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.792046 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.794631 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" (UID: "afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.802207 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.802235 4728 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.802244 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.802254 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.802264 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hfwv\" (UniqueName: \"kubernetes.io/projected/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-kube-api-access-2hfwv\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.814800 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025" (OuterVolumeSpecName: "glance") pod "afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" (UID: "afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0"). InnerVolumeSpecName "pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.823594 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" (UID: "afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.866700 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-config-data" (OuterVolumeSpecName: "config-data") pod "afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" (UID: "afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.904450 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk55s\" (UniqueName: \"kubernetes.io/projected/5cb277f6-0a63-4989-9ea8-6437c4b1d09b-kube-api-access-bk55s\") pod \"nova-cell0-conductor-0\" (UID: \"5cb277f6-0a63-4989-9ea8-6437c4b1d09b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.904526 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb277f6-0a63-4989-9ea8-6437c4b1d09b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5cb277f6-0a63-4989-9ea8-6437c4b1d09b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.904558 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb277f6-0a63-4989-9ea8-6437c4b1d09b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5cb277f6-0a63-4989-9ea8-6437c4b1d09b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.904800 4728 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\") on node \"crc\" " Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.904820 4728 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.904831 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.935076 4728 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 27 10:50:44 crc kubenswrapper[4728]: I0227 10:50:44.935218 4728 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025") on node "crc" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.007389 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk55s\" (UniqueName: \"kubernetes.io/projected/5cb277f6-0a63-4989-9ea8-6437c4b1d09b-kube-api-access-bk55s\") pod \"nova-cell0-conductor-0\" (UID: \"5cb277f6-0a63-4989-9ea8-6437c4b1d09b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.007452 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb277f6-0a63-4989-9ea8-6437c4b1d09b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5cb277f6-0a63-4989-9ea8-6437c4b1d09b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.007492 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb277f6-0a63-4989-9ea8-6437c4b1d09b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5cb277f6-0a63-4989-9ea8-6437c4b1d09b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.007729 4728 reconciler_common.go:293] "Volume detached for volume \"pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.011691 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb277f6-0a63-4989-9ea8-6437c4b1d09b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5cb277f6-0a63-4989-9ea8-6437c4b1d09b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.014116 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb277f6-0a63-4989-9ea8-6437c4b1d09b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5cb277f6-0a63-4989-9ea8-6437c4b1d09b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.030056 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk55s\" (UniqueName: \"kubernetes.io/projected/5cb277f6-0a63-4989-9ea8-6437c4b1d09b-kube-api-access-bk55s\") pod \"nova-cell0-conductor-0\" (UID: \"5cb277f6-0a63-4989-9ea8-6437c4b1d09b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.115034 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.256750 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.274574 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.293563 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.295526 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.298194 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.298563 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.307701 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.419179 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d607ed-8cde-48d2-9d5e-fa0903477b07-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.419560 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d607ed-8cde-48d2-9d5e-fa0903477b07-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.419637 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.419685 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4d607ed-8cde-48d2-9d5e-fa0903477b07-logs\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.419708 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkfhv\" (UniqueName: \"kubernetes.io/projected/b4d607ed-8cde-48d2-9d5e-fa0903477b07-kube-api-access-tkfhv\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.419732 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4d607ed-8cde-48d2-9d5e-fa0903477b07-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.419809 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b4d607ed-8cde-48d2-9d5e-fa0903477b07-scripts\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.419833 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d607ed-8cde-48d2-9d5e-fa0903477b07-config-data\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.522168 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4d607ed-8cde-48d2-9d5e-fa0903477b07-logs\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.522220 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkfhv\" (UniqueName: \"kubernetes.io/projected/b4d607ed-8cde-48d2-9d5e-fa0903477b07-kube-api-access-tkfhv\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.522249 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4d607ed-8cde-48d2-9d5e-fa0903477b07-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.522333 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b4d607ed-8cde-48d2-9d5e-fa0903477b07-scripts\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.522369 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d607ed-8cde-48d2-9d5e-fa0903477b07-config-data\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.522419 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d607ed-8cde-48d2-9d5e-fa0903477b07-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.522497 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d607ed-8cde-48d2-9d5e-fa0903477b07-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.522616 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.523340 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/b4d607ed-8cde-48d2-9d5e-fa0903477b07-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.523599 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4d607ed-8cde-48d2-9d5e-fa0903477b07-logs\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.537373 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d607ed-8cde-48d2-9d5e-fa0903477b07-config-data\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.542224 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d607ed-8cde-48d2-9d5e-fa0903477b07-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.546189 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d607ed-8cde-48d2-9d5e-fa0903477b07-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.548400 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d607ed-8cde-48d2-9d5e-fa0903477b07-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.560712 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkfhv\" (UniqueName: \"kubernetes.io/projected/b4d607ed-8cde-48d2-9d5e-fa0903477b07-kube-api-access-tkfhv\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.701978 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.702017 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d1be12f0d287d711dc64205ed4c6dc8f46de7817821a549d7df0761d45187fe2/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.754221 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57a3f881-6a5d-4f01-9ebb-b4d4e5ba8025\") pod \"glance-default-external-api-0\" (UID: \"b4d607ed-8cde-48d2-9d5e-fa0903477b07\") " pod="openstack/glance-default-external-api-0" Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.826697 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 10:50:45 crc kubenswrapper[4728]: W0227 10:50:45.829431 4728 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cb277f6_0a63_4989_9ea8_6437c4b1d09b.slice/crio-6a99b56f8dc94979b3ce4c2e404da8288a5ab8200b7855c4366032c51095d44f WatchSource:0}: Error finding container 6a99b56f8dc94979b3ce4c2e404da8288a5ab8200b7855c4366032c51095d44f: Status 404 returned error can't find the container with id 6a99b56f8dc94979b3ce4c2e404da8288a5ab8200b7855c4366032c51095d44f Feb 27 10:50:45 crc kubenswrapper[4728]: I0227 10:50:45.931163 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 10:50:46 crc kubenswrapper[4728]: W0227 10:50:46.508700 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4d607ed_8cde_48d2_9d5e_fa0903477b07.slice/crio-a6b89a6b388678d74d73974cce97367ce1fa46cdc1cd111f6c7e933533470a07 WatchSource:0}: Error finding container a6b89a6b388678d74d73974cce97367ce1fa46cdc1cd111f6c7e933533470a07: Status 404 returned error can't find the container with id a6b89a6b388678d74d73974cce97367ce1fa46cdc1cd111f6c7e933533470a07 Feb 27 10:50:46 crc kubenswrapper[4728]: I0227 10:50:46.512868 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 10:50:46 crc kubenswrapper[4728]: I0227 10:50:46.706163 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b4d607ed-8cde-48d2-9d5e-fa0903477b07","Type":"ContainerStarted","Data":"a6b89a6b388678d74d73974cce97367ce1fa46cdc1cd111f6c7e933533470a07"} Feb 27 10:50:46 crc kubenswrapper[4728]: I0227 10:50:46.709092 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5cb277f6-0a63-4989-9ea8-6437c4b1d09b","Type":"ContainerStarted","Data":"da30d1b253176558ecddefd1fb7703e69fde36feacbb30e226907c38bb2513e3"} Feb 27 10:50:46 crc kubenswrapper[4728]: I0227 10:50:46.709133 4728 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5cb277f6-0a63-4989-9ea8-6437c4b1d09b","Type":"ContainerStarted","Data":"6a99b56f8dc94979b3ce4c2e404da8288a5ab8200b7855c4366032c51095d44f"} Feb 27 10:50:46 crc kubenswrapper[4728]: I0227 10:50:46.709337 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 27 10:50:46 crc kubenswrapper[4728]: I0227 10:50:46.725193 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.72517597 podStartE2EDuration="2.72517597s" podCreationTimestamp="2026-02-27 10:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:50:46.721784548 +0000 UTC m=+1466.684150654" watchObservedRunningTime="2026-02-27 10:50:46.72517597 +0000 UTC m=+1466.687542076" Feb 27 10:50:46 crc kubenswrapper[4728]: I0227 10:50:46.742580 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0" path="/var/lib/kubelet/pods/afb3f3cd-8dc7-4bfb-b46d-aa7d1eef7dc0/volumes" Feb 27 10:50:47 crc kubenswrapper[4728]: I0227 10:50:47.726705 4728 generic.go:334] "Generic (PLEG): container finished" podID="555793ce-76da-4384-8b40-4133438d1bec" containerID="0715becab8c00893ac70c9215a4b8c871c38975372c5d09fea90bc6ad8464c0a" exitCode=0 Feb 27 10:50:47 crc kubenswrapper[4728]: I0227 10:50:47.727206 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"555793ce-76da-4384-8b40-4133438d1bec","Type":"ContainerDied","Data":"0715becab8c00893ac70c9215a4b8c871c38975372c5d09fea90bc6ad8464c0a"} Feb 27 10:50:47 crc kubenswrapper[4728]: I0227 10:50:47.735411 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"b4d607ed-8cde-48d2-9d5e-fa0903477b07","Type":"ContainerStarted","Data":"8b080b008d72eacb44be1ce510a07a7180829a0d5ea9aa6fa0b00bb1e25a7de3"} Feb 27 10:50:47 crc kubenswrapper[4728]: I0227 10:50:47.892250 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:50:47 crc kubenswrapper[4728]: I0227 10:50:47.979024 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-combined-ca-bundle\") pod \"555793ce-76da-4384-8b40-4133438d1bec\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " Feb 27 10:50:47 crc kubenswrapper[4728]: I0227 10:50:47.979319 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-scripts\") pod \"555793ce-76da-4384-8b40-4133438d1bec\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " Feb 27 10:50:47 crc kubenswrapper[4728]: I0227 10:50:47.979355 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555793ce-76da-4384-8b40-4133438d1bec-run-httpd\") pod \"555793ce-76da-4384-8b40-4133438d1bec\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " Feb 27 10:50:47 crc kubenswrapper[4728]: I0227 10:50:47.979374 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-config-data\") pod \"555793ce-76da-4384-8b40-4133438d1bec\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " Feb 27 10:50:47 crc kubenswrapper[4728]: I0227 10:50:47.979491 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555793ce-76da-4384-8b40-4133438d1bec-log-httpd\") pod 
\"555793ce-76da-4384-8b40-4133438d1bec\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " Feb 27 10:50:47 crc kubenswrapper[4728]: I0227 10:50:47.979537 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-sg-core-conf-yaml\") pod \"555793ce-76da-4384-8b40-4133438d1bec\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " Feb 27 10:50:47 crc kubenswrapper[4728]: I0227 10:50:47.979571 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx5z6\" (UniqueName: \"kubernetes.io/projected/555793ce-76da-4384-8b40-4133438d1bec-kube-api-access-cx5z6\") pod \"555793ce-76da-4384-8b40-4133438d1bec\" (UID: \"555793ce-76da-4384-8b40-4133438d1bec\") " Feb 27 10:50:47 crc kubenswrapper[4728]: I0227 10:50:47.979818 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/555793ce-76da-4384-8b40-4133438d1bec-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "555793ce-76da-4384-8b40-4133438d1bec" (UID: "555793ce-76da-4384-8b40-4133438d1bec"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:50:47 crc kubenswrapper[4728]: I0227 10:50:47.980591 4728 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555793ce-76da-4384-8b40-4133438d1bec-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:47 crc kubenswrapper[4728]: I0227 10:50:47.980990 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/555793ce-76da-4384-8b40-4133438d1bec-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "555793ce-76da-4384-8b40-4133438d1bec" (UID: "555793ce-76da-4384-8b40-4133438d1bec"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:50:47 crc kubenswrapper[4728]: I0227 10:50:47.984927 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/555793ce-76da-4384-8b40-4133438d1bec-kube-api-access-cx5z6" (OuterVolumeSpecName: "kube-api-access-cx5z6") pod "555793ce-76da-4384-8b40-4133438d1bec" (UID: "555793ce-76da-4384-8b40-4133438d1bec"). InnerVolumeSpecName "kube-api-access-cx5z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:50:47 crc kubenswrapper[4728]: I0227 10:50:47.988938 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-scripts" (OuterVolumeSpecName: "scripts") pod "555793ce-76da-4384-8b40-4133438d1bec" (UID: "555793ce-76da-4384-8b40-4133438d1bec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.021767 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "555793ce-76da-4384-8b40-4133438d1bec" (UID: "555793ce-76da-4384-8b40-4133438d1bec"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.084385 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.084413 4728 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/555793ce-76da-4384-8b40-4133438d1bec-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.084422 4728 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.084431 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx5z6\" (UniqueName: \"kubernetes.io/projected/555793ce-76da-4384-8b40-4133438d1bec-kube-api-access-cx5z6\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.112423 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "555793ce-76da-4384-8b40-4133438d1bec" (UID: "555793ce-76da-4384-8b40-4133438d1bec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.148943 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-config-data" (OuterVolumeSpecName: "config-data") pod "555793ce-76da-4384-8b40-4133438d1bec" (UID: "555793ce-76da-4384-8b40-4133438d1bec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.187962 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.188240 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555793ce-76da-4384-8b40-4133438d1bec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.747229 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b4d607ed-8cde-48d2-9d5e-fa0903477b07","Type":"ContainerStarted","Data":"26b33d36f944d37ce912a635ebd094789ec6f2f5f903d6fea4042b847c9efbc8"} Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.752057 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"555793ce-76da-4384-8b40-4133438d1bec","Type":"ContainerDied","Data":"053d4e2fbdf746aa13422465227c6a3d739f5764fc21a048fb697134ecc1768f"} Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.752117 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.752183 4728 scope.go:117] "RemoveContainer" containerID="dd86b632d9818237c5ad8cfe3a03f71d330a4e6cb0899a1e566bb8b267c6b90b" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.817903 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.817880686 podStartE2EDuration="3.817880686s" podCreationTimestamp="2026-02-27 10:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:50:48.794314416 +0000 UTC m=+1468.756680522" watchObservedRunningTime="2026-02-27 10:50:48.817880686 +0000 UTC m=+1468.780246792" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.823664 4728 scope.go:117] "RemoveContainer" containerID="b3197829f9f03b9abde5dc0fcfb961d95cba9a6be02e975621b5e11dc5c65edb" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.862879 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.874144 4728 scope.go:117] "RemoveContainer" containerID="269d85f72aa432f3ec61376d13460d5d06bfd1dbe10308c8c9a03ed5857c8529" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.883851 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.920558 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:50:48 crc kubenswrapper[4728]: E0227 10:50:48.921056 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555793ce-76da-4384-8b40-4133438d1bec" containerName="proxy-httpd" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.921072 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="555793ce-76da-4384-8b40-4133438d1bec" containerName="proxy-httpd" Feb 27 
10:50:48 crc kubenswrapper[4728]: E0227 10:50:48.921090 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555793ce-76da-4384-8b40-4133438d1bec" containerName="ceilometer-central-agent" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.921096 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="555793ce-76da-4384-8b40-4133438d1bec" containerName="ceilometer-central-agent" Feb 27 10:50:48 crc kubenswrapper[4728]: E0227 10:50:48.921128 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555793ce-76da-4384-8b40-4133438d1bec" containerName="sg-core" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.921135 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="555793ce-76da-4384-8b40-4133438d1bec" containerName="sg-core" Feb 27 10:50:48 crc kubenswrapper[4728]: E0227 10:50:48.921154 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555793ce-76da-4384-8b40-4133438d1bec" containerName="ceilometer-notification-agent" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.921160 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="555793ce-76da-4384-8b40-4133438d1bec" containerName="ceilometer-notification-agent" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.921342 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="555793ce-76da-4384-8b40-4133438d1bec" containerName="ceilometer-central-agent" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.921362 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="555793ce-76da-4384-8b40-4133438d1bec" containerName="proxy-httpd" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.921372 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="555793ce-76da-4384-8b40-4133438d1bec" containerName="ceilometer-notification-agent" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.921387 4728 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="555793ce-76da-4384-8b40-4133438d1bec" containerName="sg-core" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.923295 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.934637 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.935048 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.935219 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 10:50:48 crc kubenswrapper[4728]: I0227 10:50:48.991582 4728 scope.go:117] "RemoveContainer" containerID="0715becab8c00893ac70c9215a4b8c871c38975372c5d09fea90bc6ad8464c0a" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.007006 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-scripts\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.007060 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b6qb\" (UniqueName: \"kubernetes.io/projected/50db51f7-f183-4a84-acd2-4320f33423f2-kube-api-access-6b6qb\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.007094 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50db51f7-f183-4a84-acd2-4320f33423f2-log-httpd\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " 
pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.007119 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.007193 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.007233 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-config-data\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.007287 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50db51f7-f183-4a84-acd2-4320f33423f2-run-httpd\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.024748 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.025924 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.062626 4728 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.073068 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.108923 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50db51f7-f183-4a84-acd2-4320f33423f2-run-httpd\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.109367 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50db51f7-f183-4a84-acd2-4320f33423f2-run-httpd\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.109601 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-scripts\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.109637 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b6qb\" (UniqueName: \"kubernetes.io/projected/50db51f7-f183-4a84-acd2-4320f33423f2-kube-api-access-6b6qb\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.110593 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50db51f7-f183-4a84-acd2-4320f33423f2-log-httpd\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " 
pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.110616 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.110701 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.110772 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-config-data\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.112051 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50db51f7-f183-4a84-acd2-4320f33423f2-log-httpd\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.116803 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-scripts\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.117184 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-config-data\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.117832 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.122061 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.127251 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b6qb\" (UniqueName: \"kubernetes.io/projected/50db51f7-f183-4a84-acd2-4320f33423f2-kube-api-access-6b6qb\") pod \"ceilometer-0\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.282587 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.765116 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.765671 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 27 10:50:49 crc kubenswrapper[4728]: I0227 10:50:49.861783 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:50:50 crc kubenswrapper[4728]: I0227 10:50:50.749044 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="555793ce-76da-4384-8b40-4133438d1bec" path="/var/lib/kubelet/pods/555793ce-76da-4384-8b40-4133438d1bec/volumes" Feb 27 10:50:50 crc kubenswrapper[4728]: I0227 10:50:50.782489 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50db51f7-f183-4a84-acd2-4320f33423f2","Type":"ContainerStarted","Data":"cc94aa8a47d61092c4258b26b06534ee442b22ca9f85dc9c7d7665aa4a114aba"} Feb 27 10:50:51 crc kubenswrapper[4728]: I0227 10:50:51.791936 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50db51f7-f183-4a84-acd2-4320f33423f2","Type":"ContainerStarted","Data":"fec23eb88f656a352e56a75c27663bd020af92ffa9bb82e89103888f0ef6656a"} Feb 27 10:50:51 crc kubenswrapper[4728]: I0227 10:50:51.792265 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50db51f7-f183-4a84-acd2-4320f33423f2","Type":"ContainerStarted","Data":"45e426aaf7dcae12e9251a4339d4393052ed79f1a1ed3181e44b611f9642cad9"} Feb 27 10:50:51 crc kubenswrapper[4728]: I0227 10:50:51.791963 4728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 10:50:51 crc kubenswrapper[4728]: I0227 10:50:51.792293 4728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 
10:50:52 crc kubenswrapper[4728]: I0227 10:50:52.809785 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50db51f7-f183-4a84-acd2-4320f33423f2","Type":"ContainerStarted","Data":"8e71b0296019b8dde3447ccb1c820c51daac7e5894773164e46843f7db3bf689"} Feb 27 10:50:52 crc kubenswrapper[4728]: I0227 10:50:52.824154 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 10:50:52 crc kubenswrapper[4728]: I0227 10:50:52.824239 4728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 10:50:52 crc kubenswrapper[4728]: I0227 10:50:52.860439 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 10:50:54 crc kubenswrapper[4728]: I0227 10:50:54.832749 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50db51f7-f183-4a84-acd2-4320f33423f2","Type":"ContainerStarted","Data":"91668e3ea1180f688fb19b668d9650894cf159e3c2a5a0c8fff9d8851f70f25e"} Feb 27 10:50:54 crc kubenswrapper[4728]: I0227 10:50:54.833548 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 10:50:54 crc kubenswrapper[4728]: I0227 10:50:54.860559 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.756117081 podStartE2EDuration="6.860536614s" podCreationTimestamp="2026-02-27 10:50:48 +0000 UTC" firstStartedPulling="2026-02-27 10:50:49.866364317 +0000 UTC m=+1469.828730423" lastFinishedPulling="2026-02-27 10:50:53.97078384 +0000 UTC m=+1473.933149956" observedRunningTime="2026-02-27 10:50:54.853657618 +0000 UTC m=+1474.816023744" watchObservedRunningTime="2026-02-27 10:50:54.860536614 +0000 UTC m=+1474.822902730" Feb 27 10:50:55 crc kubenswrapper[4728]: I0227 10:50:55.156481 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-cell0-conductor-0" Feb 27 10:50:55 crc kubenswrapper[4728]: I0227 10:50:55.931709 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 10:50:55 crc kubenswrapper[4728]: I0227 10:50:55.932071 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 10:50:55 crc kubenswrapper[4728]: I0227 10:50:55.934204 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-mmv5g"] Feb 27 10:50:55 crc kubenswrapper[4728]: I0227 10:50:55.935679 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mmv5g" Feb 27 10:50:55 crc kubenswrapper[4728]: I0227 10:50:55.938297 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 27 10:50:55 crc kubenswrapper[4728]: I0227 10:50:55.942036 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 27 10:50:55 crc kubenswrapper[4728]: I0227 10:50:55.951663 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mmv5g"] Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.000772 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.018862 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.020289 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppg9x\" (UniqueName: \"kubernetes.io/projected/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-kube-api-access-ppg9x\") pod \"nova-cell0-cell-mapping-mmv5g\" (UID: 
\"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\") " pod="openstack/nova-cell0-cell-mapping-mmv5g" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.020658 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-scripts\") pod \"nova-cell0-cell-mapping-mmv5g\" (UID: \"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\") " pod="openstack/nova-cell0-cell-mapping-mmv5g" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.020727 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mmv5g\" (UID: \"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\") " pod="openstack/nova-cell0-cell-mapping-mmv5g" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.020756 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-config-data\") pod \"nova-cell0-cell-mapping-mmv5g\" (UID: \"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\") " pod="openstack/nova-cell0-cell-mapping-mmv5g" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.125666 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-scripts\") pod \"nova-cell0-cell-mapping-mmv5g\" (UID: \"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\") " pod="openstack/nova-cell0-cell-mapping-mmv5g" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.125738 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mmv5g\" (UID: 
\"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\") " pod="openstack/nova-cell0-cell-mapping-mmv5g" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.125785 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-config-data\") pod \"nova-cell0-cell-mapping-mmv5g\" (UID: \"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\") " pod="openstack/nova-cell0-cell-mapping-mmv5g" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.125864 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppg9x\" (UniqueName: \"kubernetes.io/projected/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-kube-api-access-ppg9x\") pod \"nova-cell0-cell-mapping-mmv5g\" (UID: \"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\") " pod="openstack/nova-cell0-cell-mapping-mmv5g" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.136155 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-scripts\") pod \"nova-cell0-cell-mapping-mmv5g\" (UID: \"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\") " pod="openstack/nova-cell0-cell-mapping-mmv5g" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.140092 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mmv5g\" (UID: \"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\") " pod="openstack/nova-cell0-cell-mapping-mmv5g" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.140624 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-config-data\") pod \"nova-cell0-cell-mapping-mmv5g\" (UID: \"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\") " 
pod="openstack/nova-cell0-cell-mapping-mmv5g" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.176385 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.177965 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.187604 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.201095 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppg9x\" (UniqueName: \"kubernetes.io/projected/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-kube-api-access-ppg9x\") pod \"nova-cell0-cell-mapping-mmv5g\" (UID: \"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\") " pod="openstack/nova-cell0-cell-mapping-mmv5g" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.208168 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.210145 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.221765 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.226048 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.252884 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.261083 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mmv5g" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.331471 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv4d5\" (UniqueName: \"kubernetes.io/projected/67af1e62-168c-4a94-a206-79158119b0a4-kube-api-access-cv4d5\") pod \"nova-scheduler-0\" (UID: \"67af1e62-168c-4a94-a206-79158119b0a4\") " pod="openstack/nova-scheduler-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.336582 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4517eec5-2129-4678-9f45-e149b9ef35fc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4517eec5-2129-4678-9f45-e149b9ef35fc\") " pod="openstack/nova-metadata-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.338831 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4517eec5-2129-4678-9f45-e149b9ef35fc-logs\") pod \"nova-metadata-0\" (UID: \"4517eec5-2129-4678-9f45-e149b9ef35fc\") " pod="openstack/nova-metadata-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.339086 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4517eec5-2129-4678-9f45-e149b9ef35fc-config-data\") pod \"nova-metadata-0\" (UID: \"4517eec5-2129-4678-9f45-e149b9ef35fc\") " pod="openstack/nova-metadata-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.345343 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67af1e62-168c-4a94-a206-79158119b0a4-config-data\") pod \"nova-scheduler-0\" (UID: \"67af1e62-168c-4a94-a206-79158119b0a4\") " pod="openstack/nova-scheduler-0" Feb 27 10:50:56 crc 
kubenswrapper[4728]: I0227 10:50:56.345456 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67af1e62-168c-4a94-a206-79158119b0a4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"67af1e62-168c-4a94-a206-79158119b0a4\") " pod="openstack/nova-scheduler-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.345719 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqxfr\" (UniqueName: \"kubernetes.io/projected/4517eec5-2129-4678-9f45-e149b9ef35fc-kube-api-access-pqxfr\") pod \"nova-metadata-0\" (UID: \"4517eec5-2129-4678-9f45-e149b9ef35fc\") " pod="openstack/nova-metadata-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.366670 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7877d89589-bb282"] Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.368612 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.387605 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-bb282"] Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.419735 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.421146 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.423879 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.448326 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4517eec5-2129-4678-9f45-e149b9ef35fc-config-data\") pod \"nova-metadata-0\" (UID: \"4517eec5-2129-4678-9f45-e149b9ef35fc\") " pod="openstack/nova-metadata-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.448486 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67af1e62-168c-4a94-a206-79158119b0a4-config-data\") pod \"nova-scheduler-0\" (UID: \"67af1e62-168c-4a94-a206-79158119b0a4\") " pod="openstack/nova-scheduler-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.448535 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67af1e62-168c-4a94-a206-79158119b0a4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"67af1e62-168c-4a94-a206-79158119b0a4\") " pod="openstack/nova-scheduler-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.448609 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-config\") pod \"dnsmasq-dns-7877d89589-bb282\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.448676 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqxfr\" (UniqueName: \"kubernetes.io/projected/4517eec5-2129-4678-9f45-e149b9ef35fc-kube-api-access-pqxfr\") pod 
\"nova-metadata-0\" (UID: \"4517eec5-2129-4678-9f45-e149b9ef35fc\") " pod="openstack/nova-metadata-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.448724 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-dns-svc\") pod \"dnsmasq-dns-7877d89589-bb282\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.448758 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctcxz\" (UniqueName: \"kubernetes.io/projected/03fe94ce-b874-4412-b3a8-adb6d8172507-kube-api-access-ctcxz\") pod \"dnsmasq-dns-7877d89589-bb282\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.448776 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-bb282\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.448851 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-bb282\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.448918 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv4d5\" (UniqueName: 
\"kubernetes.io/projected/67af1e62-168c-4a94-a206-79158119b0a4-kube-api-access-cv4d5\") pod \"nova-scheduler-0\" (UID: \"67af1e62-168c-4a94-a206-79158119b0a4\") " pod="openstack/nova-scheduler-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.448943 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4517eec5-2129-4678-9f45-e149b9ef35fc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4517eec5-2129-4678-9f45-e149b9ef35fc\") " pod="openstack/nova-metadata-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.448963 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4517eec5-2129-4678-9f45-e149b9ef35fc-logs\") pod \"nova-metadata-0\" (UID: \"4517eec5-2129-4678-9f45-e149b9ef35fc\") " pod="openstack/nova-metadata-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.449012 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-bb282\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.455041 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.459623 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67af1e62-168c-4a94-a206-79158119b0a4-config-data\") pod \"nova-scheduler-0\" (UID: \"67af1e62-168c-4a94-a206-79158119b0a4\") " pod="openstack/nova-scheduler-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.462894 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/67af1e62-168c-4a94-a206-79158119b0a4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"67af1e62-168c-4a94-a206-79158119b0a4\") " pod="openstack/nova-scheduler-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.502059 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4517eec5-2129-4678-9f45-e149b9ef35fc-logs\") pod \"nova-metadata-0\" (UID: \"4517eec5-2129-4678-9f45-e149b9ef35fc\") " pod="openstack/nova-metadata-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.507077 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4517eec5-2129-4678-9f45-e149b9ef35fc-config-data\") pod \"nova-metadata-0\" (UID: \"4517eec5-2129-4678-9f45-e149b9ef35fc\") " pod="openstack/nova-metadata-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.512396 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4517eec5-2129-4678-9f45-e149b9ef35fc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4517eec5-2129-4678-9f45-e149b9ef35fc\") " pod="openstack/nova-metadata-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.517216 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqxfr\" (UniqueName: \"kubernetes.io/projected/4517eec5-2129-4678-9f45-e149b9ef35fc-kube-api-access-pqxfr\") pod \"nova-metadata-0\" (UID: \"4517eec5-2129-4678-9f45-e149b9ef35fc\") " pod="openstack/nova-metadata-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.525894 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv4d5\" (UniqueName: \"kubernetes.io/projected/67af1e62-168c-4a94-a206-79158119b0a4-kube-api-access-cv4d5\") pod \"nova-scheduler-0\" (UID: \"67af1e62-168c-4a94-a206-79158119b0a4\") " pod="openstack/nova-scheduler-0" Feb 27 10:50:56 crc 
kubenswrapper[4728]: I0227 10:50:56.568380 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214d42db-ca45-403f-89e3-7026fb6abed2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"214d42db-ca45-403f-89e3-7026fb6abed2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.569099 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-config\") pod \"dnsmasq-dns-7877d89589-bb282\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.569207 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twmf7\" (UniqueName: \"kubernetes.io/projected/214d42db-ca45-403f-89e3-7026fb6abed2-kube-api-access-twmf7\") pod \"nova-cell1-novncproxy-0\" (UID: \"214d42db-ca45-403f-89e3-7026fb6abed2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.569337 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/214d42db-ca45-403f-89e3-7026fb6abed2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"214d42db-ca45-403f-89e3-7026fb6abed2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.569370 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-dns-svc\") pod \"dnsmasq-dns-7877d89589-bb282\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.569412 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctcxz\" (UniqueName: \"kubernetes.io/projected/03fe94ce-b874-4412-b3a8-adb6d8172507-kube-api-access-ctcxz\") pod \"dnsmasq-dns-7877d89589-bb282\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.569443 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-bb282\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.569606 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-bb282\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.569768 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-bb282\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.573184 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-bb282\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.574749 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-dns-svc\") pod \"dnsmasq-dns-7877d89589-bb282\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.575461 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-bb282\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.576764 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-config\") pod \"dnsmasq-dns-7877d89589-bb282\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.589395 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-bb282\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.605621 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.608702 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.631157 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.650571 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctcxz\" (UniqueName: \"kubernetes.io/projected/03fe94ce-b874-4412-b3a8-adb6d8172507-kube-api-access-ctcxz\") pod \"dnsmasq-dns-7877d89589-bb282\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.674292 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80b35d5f-b110-435d-a058-375286dc42d2-config-data\") pod \"nova-api-0\" (UID: \"80b35d5f-b110-435d-a058-375286dc42d2\") " pod="openstack/nova-api-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.674354 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b35d5f-b110-435d-a058-375286dc42d2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"80b35d5f-b110-435d-a058-375286dc42d2\") " pod="openstack/nova-api-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.674421 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twmf7\" (UniqueName: \"kubernetes.io/projected/214d42db-ca45-403f-89e3-7026fb6abed2-kube-api-access-twmf7\") pod \"nova-cell1-novncproxy-0\" (UID: \"214d42db-ca45-403f-89e3-7026fb6abed2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.674466 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/214d42db-ca45-403f-89e3-7026fb6abed2-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"214d42db-ca45-403f-89e3-7026fb6abed2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.674566 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5r6x\" (UniqueName: \"kubernetes.io/projected/80b35d5f-b110-435d-a058-375286dc42d2-kube-api-access-x5r6x\") pod \"nova-api-0\" (UID: \"80b35d5f-b110-435d-a058-375286dc42d2\") " pod="openstack/nova-api-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.674589 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80b35d5f-b110-435d-a058-375286dc42d2-logs\") pod \"nova-api-0\" (UID: \"80b35d5f-b110-435d-a058-375286dc42d2\") " pod="openstack/nova-api-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.674698 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214d42db-ca45-403f-89e3-7026fb6abed2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"214d42db-ca45-403f-89e3-7026fb6abed2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.681428 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214d42db-ca45-403f-89e3-7026fb6abed2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"214d42db-ca45-403f-89e3-7026fb6abed2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.720138 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.721339 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twmf7\" (UniqueName: \"kubernetes.io/projected/214d42db-ca45-403f-89e3-7026fb6abed2-kube-api-access-twmf7\") pod \"nova-cell1-novncproxy-0\" (UID: \"214d42db-ca45-403f-89e3-7026fb6abed2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.721822 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/214d42db-ca45-403f-89e3-7026fb6abed2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"214d42db-ca45-403f-89e3-7026fb6abed2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.767958 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.770133 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.776093 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b35d5f-b110-435d-a058-375286dc42d2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"80b35d5f-b110-435d-a058-375286dc42d2\") " pod="openstack/nova-api-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.776398 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5r6x\" (UniqueName: \"kubernetes.io/projected/80b35d5f-b110-435d-a058-375286dc42d2-kube-api-access-x5r6x\") pod \"nova-api-0\" (UID: \"80b35d5f-b110-435d-a058-375286dc42d2\") " pod="openstack/nova-api-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.776595 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80b35d5f-b110-435d-a058-375286dc42d2-logs\") pod \"nova-api-0\" (UID: \"80b35d5f-b110-435d-a058-375286dc42d2\") " pod="openstack/nova-api-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.776757 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80b35d5f-b110-435d-a058-375286dc42d2-config-data\") pod \"nova-api-0\" (UID: \"80b35d5f-b110-435d-a058-375286dc42d2\") " pod="openstack/nova-api-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.780151 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80b35d5f-b110-435d-a058-375286dc42d2-logs\") pod \"nova-api-0\" (UID: \"80b35d5f-b110-435d-a058-375286dc42d2\") " pod="openstack/nova-api-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.794144 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/80b35d5f-b110-435d-a058-375286dc42d2-config-data\") pod \"nova-api-0\" (UID: \"80b35d5f-b110-435d-a058-375286dc42d2\") " pod="openstack/nova-api-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.794588 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b35d5f-b110-435d-a058-375286dc42d2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"80b35d5f-b110-435d-a058-375286dc42d2\") " pod="openstack/nova-api-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.813243 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5r6x\" (UniqueName: \"kubernetes.io/projected/80b35d5f-b110-435d-a058-375286dc42d2-kube-api-access-x5r6x\") pod \"nova-api-0\" (UID: \"80b35d5f-b110-435d-a058-375286dc42d2\") " pod="openstack/nova-api-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.844810 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.874579 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.892899 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.893073 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 10:50:56 crc kubenswrapper[4728]: I0227 10:50:56.989372 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:50:57 crc kubenswrapper[4728]: I0227 10:50:57.040054 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mmv5g"] Feb 27 10:50:57 crc kubenswrapper[4728]: I0227 10:50:57.855565 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:50:57 crc kubenswrapper[4728]: I0227 10:50:57.909600 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-bb282"] Feb 27 10:50:57 crc kubenswrapper[4728]: I0227 10:50:57.945441 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"214d42db-ca45-403f-89e3-7026fb6abed2","Type":"ContainerStarted","Data":"fdf2e1d636a05173ca7fc92d13da23736a716b0b721b9c679fa9430e29a5c4d8"} Feb 27 10:50:57 crc kubenswrapper[4728]: I0227 10:50:57.945901 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 10:50:57 crc kubenswrapper[4728]: I0227 10:50:57.953199 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mmv5g" event={"ID":"e93f65f0-9517-45e7-bcfc-3cbb70046b3e","Type":"ContainerStarted","Data":"9f6ea11dad6bb2049d938751dcc3375fb2100ea3e696693d6c0e5f6ed31e08ad"} Feb 27 10:50:57 crc kubenswrapper[4728]: I0227 10:50:57.953248 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mmv5g" event={"ID":"e93f65f0-9517-45e7-bcfc-3cbb70046b3e","Type":"ContainerStarted","Data":"0e55ccad825bf81602a0b59692e1c4a7f29671d6a485ccb075d3cf6245b50833"} Feb 27 10:50:57 crc kubenswrapper[4728]: I0227 10:50:57.960834 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-bb282" event={"ID":"03fe94ce-b874-4412-b3a8-adb6d8172507","Type":"ContainerStarted","Data":"6b925af78c426e1bfb61fc68745377b0a0b85b13bf006b4106226d100e576bad"} Feb 27 10:50:57 crc kubenswrapper[4728]: I0227 
10:50:57.967196 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:50:57 crc kubenswrapper[4728]: I0227 10:50:57.971626 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4517eec5-2129-4678-9f45-e149b9ef35fc","Type":"ContainerStarted","Data":"e497f6be891f58a2fee88db240430f4559d5c6df45a8cae5fa3c9c17edf1e631"} Feb 27 10:50:57 crc kubenswrapper[4728]: I0227 10:50:57.972312 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-mmv5g" podStartSLOduration=2.972295382 podStartE2EDuration="2.972295382s" podCreationTimestamp="2026-02-27 10:50:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:50:57.971422679 +0000 UTC m=+1477.933788785" watchObservedRunningTime="2026-02-27 10:50:57.972295382 +0000 UTC m=+1477.934661488" Feb 27 10:50:57 crc kubenswrapper[4728]: I0227 10:50:57.984311 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"67af1e62-168c-4a94-a206-79158119b0a4","Type":"ContainerStarted","Data":"e7915ecf5e4c34868ea4181cdf1fd6414005a1a371bcdad0299301d0d62751b6"} Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.003550 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.563881 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-85k7n"] Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.566350 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-85k7n" Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.568942 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.569149 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.593476 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-85k7n"] Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.753885 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6gvs\" (UniqueName: \"kubernetes.io/projected/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-kube-api-access-g6gvs\") pod \"nova-cell1-conductor-db-sync-85k7n\" (UID: \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\") " pod="openstack/nova-cell1-conductor-db-sync-85k7n" Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.754261 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-config-data\") pod \"nova-cell1-conductor-db-sync-85k7n\" (UID: \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\") " pod="openstack/nova-cell1-conductor-db-sync-85k7n" Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.754344 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-scripts\") pod \"nova-cell1-conductor-db-sync-85k7n\" (UID: \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\") " pod="openstack/nova-cell1-conductor-db-sync-85k7n" Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.754588 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-85k7n\" (UID: \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\") " pod="openstack/nova-cell1-conductor-db-sync-85k7n" Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.857802 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6gvs\" (UniqueName: \"kubernetes.io/projected/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-kube-api-access-g6gvs\") pod \"nova-cell1-conductor-db-sync-85k7n\" (UID: \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\") " pod="openstack/nova-cell1-conductor-db-sync-85k7n" Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.857890 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-config-data\") pod \"nova-cell1-conductor-db-sync-85k7n\" (UID: \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\") " pod="openstack/nova-cell1-conductor-db-sync-85k7n" Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.857914 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-scripts\") pod \"nova-cell1-conductor-db-sync-85k7n\" (UID: \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\") " pod="openstack/nova-cell1-conductor-db-sync-85k7n" Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.857997 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-85k7n\" (UID: \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\") " pod="openstack/nova-cell1-conductor-db-sync-85k7n" Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.866352 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-scripts\") pod \"nova-cell1-conductor-db-sync-85k7n\" (UID: \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\") " pod="openstack/nova-cell1-conductor-db-sync-85k7n" Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.871096 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-config-data\") pod \"nova-cell1-conductor-db-sync-85k7n\" (UID: \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\") " pod="openstack/nova-cell1-conductor-db-sync-85k7n" Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.876969 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-85k7n\" (UID: \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\") " pod="openstack/nova-cell1-conductor-db-sync-85k7n" Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.877812 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6gvs\" (UniqueName: \"kubernetes.io/projected/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-kube-api-access-g6gvs\") pod \"nova-cell1-conductor-db-sync-85k7n\" (UID: \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\") " pod="openstack/nova-cell1-conductor-db-sync-85k7n" Feb 27 10:50:58 crc kubenswrapper[4728]: I0227 10:50:58.897450 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-85k7n" Feb 27 10:50:59 crc kubenswrapper[4728]: I0227 10:50:59.019658 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"80b35d5f-b110-435d-a058-375286dc42d2","Type":"ContainerStarted","Data":"129f1cf9af9f01d8b9fcdb9f577f4725c59b85c89f38902753db9b2323ed8e9c"} Feb 27 10:50:59 crc kubenswrapper[4728]: I0227 10:50:59.031614 4728 generic.go:334] "Generic (PLEG): container finished" podID="03fe94ce-b874-4412-b3a8-adb6d8172507" containerID="d5b702f4c67f3c1b5305b10b926e10817a9c76ae5867e9c02cf500c9f85c8fd4" exitCode=0 Feb 27 10:50:59 crc kubenswrapper[4728]: I0227 10:50:59.033402 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-bb282" event={"ID":"03fe94ce-b874-4412-b3a8-adb6d8172507","Type":"ContainerDied","Data":"d5b702f4c67f3c1b5305b10b926e10817a9c76ae5867e9c02cf500c9f85c8fd4"} Feb 27 10:50:59 crc kubenswrapper[4728]: I0227 10:50:59.583925 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-85k7n"] Feb 27 10:51:00 crc kubenswrapper[4728]: I0227 10:51:00.051183 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-85k7n" event={"ID":"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395","Type":"ContainerStarted","Data":"e7f9e5e7af91bf89f829ae55dd9e9b4c6b616663b446670de46b1807e2c16165"} Feb 27 10:51:00 crc kubenswrapper[4728]: I0227 10:51:00.051538 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-85k7n" event={"ID":"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395","Type":"ContainerStarted","Data":"f89cf4e70cfa4f706dbcaa1f7834610172d3a53e002a226f898517d0b3eaa6ff"} Feb 27 10:51:00 crc kubenswrapper[4728]: I0227 10:51:00.056563 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-bb282" 
event={"ID":"03fe94ce-b874-4412-b3a8-adb6d8172507","Type":"ContainerStarted","Data":"43f906b3b632e54291e58d236fff5ceca13200ace38dc4fbb48d1a0447e98594"} Feb 27 10:51:00 crc kubenswrapper[4728]: I0227 10:51:00.056880 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:51:00 crc kubenswrapper[4728]: I0227 10:51:00.083274 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7877d89589-bb282" podStartSLOduration=4.083245143 podStartE2EDuration="4.083245143s" podCreationTimestamp="2026-02-27 10:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:51:00.075286837 +0000 UTC m=+1480.037652963" watchObservedRunningTime="2026-02-27 10:51:00.083245143 +0000 UTC m=+1480.045611289" Feb 27 10:51:00 crc kubenswrapper[4728]: I0227 10:51:00.321967 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:51:00 crc kubenswrapper[4728]: I0227 10:51:00.334899 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 10:51:00 crc kubenswrapper[4728]: I0227 10:51:00.440751 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 27 10:51:00 crc kubenswrapper[4728]: I0227 10:51:00.441156 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 27 10:51:01 crc kubenswrapper[4728]: I0227 10:51:01.078787 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-85k7n" podStartSLOduration=3.078768647 podStartE2EDuration="3.078768647s" podCreationTimestamp="2026-02-27 10:50:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-27 10:51:01.077108002 +0000 UTC m=+1481.039474118" watchObservedRunningTime="2026-02-27 10:51:01.078768647 +0000 UTC m=+1481.041134753" Feb 27 10:51:01 crc kubenswrapper[4728]: I0227 10:51:01.534453 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:51:01 crc kubenswrapper[4728]: I0227 10:51:01.592894 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 10:51:01 crc kubenswrapper[4728]: I0227 10:51:01.593163 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="5cb277f6-0a63-4989-9ea8-6437c4b1d09b" containerName="nova-cell0-conductor-conductor" containerID="cri-o://da30d1b253176558ecddefd1fb7703e69fde36feacbb30e226907c38bb2513e3" gracePeriod=30 Feb 27 10:51:01 crc kubenswrapper[4728]: I0227 10:51:01.622760 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:51:04 crc kubenswrapper[4728]: I0227 10:51:04.140157 4728 generic.go:334] "Generic (PLEG): container finished" podID="5cb277f6-0a63-4989-9ea8-6437c4b1d09b" containerID="da30d1b253176558ecddefd1fb7703e69fde36feacbb30e226907c38bb2513e3" exitCode=0 Feb 27 10:51:04 crc kubenswrapper[4728]: I0227 10:51:04.140394 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5cb277f6-0a63-4989-9ea8-6437c4b1d09b","Type":"ContainerDied","Data":"da30d1b253176558ecddefd1fb7703e69fde36feacbb30e226907c38bb2513e3"} Feb 27 10:51:04 crc kubenswrapper[4728]: I0227 10:51:04.499600 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:51:04 crc kubenswrapper[4728]: I0227 10:51:04.499945 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="50db51f7-f183-4a84-acd2-4320f33423f2" containerName="ceilometer-central-agent" 
containerID="cri-o://45e426aaf7dcae12e9251a4339d4393052ed79f1a1ed3181e44b611f9642cad9" gracePeriod=30 Feb 27 10:51:04 crc kubenswrapper[4728]: I0227 10:51:04.500935 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="50db51f7-f183-4a84-acd2-4320f33423f2" containerName="proxy-httpd" containerID="cri-o://91668e3ea1180f688fb19b668d9650894cf159e3c2a5a0c8fff9d8851f70f25e" gracePeriod=30 Feb 27 10:51:04 crc kubenswrapper[4728]: I0227 10:51:04.501012 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="50db51f7-f183-4a84-acd2-4320f33423f2" containerName="sg-core" containerID="cri-o://8e71b0296019b8dde3447ccb1c820c51daac7e5894773164e46843f7db3bf689" gracePeriod=30 Feb 27 10:51:04 crc kubenswrapper[4728]: I0227 10:51:04.501057 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="50db51f7-f183-4a84-acd2-4320f33423f2" containerName="ceilometer-notification-agent" containerID="cri-o://fec23eb88f656a352e56a75c27663bd020af92ffa9bb82e89103888f0ef6656a" gracePeriod=30 Feb 27 10:51:04 crc kubenswrapper[4728]: I0227 10:51:04.550811 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="50db51f7-f183-4a84-acd2-4320f33423f2" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.244:3000/\": EOF" Feb 27 10:51:05 crc kubenswrapper[4728]: E0227 10:51:05.116592 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of da30d1b253176558ecddefd1fb7703e69fde36feacbb30e226907c38bb2513e3 is running failed: container process not found" containerID="da30d1b253176558ecddefd1fb7703e69fde36feacbb30e226907c38bb2513e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 10:51:05 crc kubenswrapper[4728]: E0227 10:51:05.117773 4728 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of da30d1b253176558ecddefd1fb7703e69fde36feacbb30e226907c38bb2513e3 is running failed: container process not found" containerID="da30d1b253176558ecddefd1fb7703e69fde36feacbb30e226907c38bb2513e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 10:51:05 crc kubenswrapper[4728]: E0227 10:51:05.120724 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of da30d1b253176558ecddefd1fb7703e69fde36feacbb30e226907c38bb2513e3 is running failed: container process not found" containerID="da30d1b253176558ecddefd1fb7703e69fde36feacbb30e226907c38bb2513e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 10:51:05 crc kubenswrapper[4728]: E0227 10:51:05.120770 4728 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of da30d1b253176558ecddefd1fb7703e69fde36feacbb30e226907c38bb2513e3 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="5cb277f6-0a63-4989-9ea8-6437c4b1d09b" containerName="nova-cell0-conductor-conductor" Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.167260 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"80b35d5f-b110-435d-a058-375286dc42d2","Type":"ContainerStarted","Data":"78f28e96064472a99d2434f1d3375d75145645bc839d8c108a12f38a70e3dcc6"} Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.171739 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"214d42db-ca45-403f-89e3-7026fb6abed2","Type":"ContainerStarted","Data":"171887b04082b4cf63e4c743d6389ebad685207ffac1fd0da2bd9773c241220a"} Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.171867 4728 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="214d42db-ca45-403f-89e3-7026fb6abed2" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://171887b04082b4cf63e4c743d6389ebad685207ffac1fd0da2bd9773c241220a" gracePeriod=30 Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.186528 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4517eec5-2129-4678-9f45-e149b9ef35fc","Type":"ContainerStarted","Data":"7ce9602f0459c15fa8ef812329b3b2d8731f01218cbcfa8cac64d19acd373cf0"} Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.189149 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.493034354 podStartE2EDuration="9.189135973s" podCreationTimestamp="2026-02-27 10:50:56 +0000 UTC" firstStartedPulling="2026-02-27 10:50:57.86898938 +0000 UTC m=+1477.831355486" lastFinishedPulling="2026-02-27 10:51:04.565090999 +0000 UTC m=+1484.527457105" observedRunningTime="2026-02-27 10:51:05.186795729 +0000 UTC m=+1485.149161835" watchObservedRunningTime="2026-02-27 10:51:05.189135973 +0000 UTC m=+1485.151502079" Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.194023 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.212698 4728 generic.go:334] "Generic (PLEG): container finished" podID="50db51f7-f183-4a84-acd2-4320f33423f2" containerID="91668e3ea1180f688fb19b668d9650894cf159e3c2a5a0c8fff9d8851f70f25e" exitCode=0 Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.212727 4728 generic.go:334] "Generic (PLEG): container finished" podID="50db51f7-f183-4a84-acd2-4320f33423f2" containerID="8e71b0296019b8dde3447ccb1c820c51daac7e5894773164e46843f7db3bf689" exitCode=2 Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.212808 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50db51f7-f183-4a84-acd2-4320f33423f2","Type":"ContainerDied","Data":"91668e3ea1180f688fb19b668d9650894cf159e3c2a5a0c8fff9d8851f70f25e"} Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.212856 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50db51f7-f183-4a84-acd2-4320f33423f2","Type":"ContainerDied","Data":"8e71b0296019b8dde3447ccb1c820c51daac7e5894773164e46843f7db3bf689"} Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.225020 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"67af1e62-168c-4a94-a206-79158119b0a4","Type":"ContainerStarted","Data":"35470350f7e534e870a85053d34cdc34d1ac705756500146263f184b9fac2860"} Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.225191 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="67af1e62-168c-4a94-a206-79158119b0a4" containerName="nova-scheduler-scheduler" containerID="cri-o://35470350f7e534e870a85053d34cdc34d1ac705756500146263f184b9fac2860" gracePeriod=30 Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.260197 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" 
podStartSLOduration=2.599912463 podStartE2EDuration="9.26017856s" podCreationTimestamp="2026-02-27 10:50:56 +0000 UTC" firstStartedPulling="2026-02-27 10:50:57.868571458 +0000 UTC m=+1477.830937564" lastFinishedPulling="2026-02-27 10:51:04.528837565 +0000 UTC m=+1484.491203661" observedRunningTime="2026-02-27 10:51:05.254954089 +0000 UTC m=+1485.217320195" watchObservedRunningTime="2026-02-27 10:51:05.26017856 +0000 UTC m=+1485.222544656" Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.263592 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb277f6-0a63-4989-9ea8-6437c4b1d09b-config-data\") pod \"5cb277f6-0a63-4989-9ea8-6437c4b1d09b\" (UID: \"5cb277f6-0a63-4989-9ea8-6437c4b1d09b\") " Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.263976 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk55s\" (UniqueName: \"kubernetes.io/projected/5cb277f6-0a63-4989-9ea8-6437c4b1d09b-kube-api-access-bk55s\") pod \"5cb277f6-0a63-4989-9ea8-6437c4b1d09b\" (UID: \"5cb277f6-0a63-4989-9ea8-6437c4b1d09b\") " Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.264133 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb277f6-0a63-4989-9ea8-6437c4b1d09b-combined-ca-bundle\") pod \"5cb277f6-0a63-4989-9ea8-6437c4b1d09b\" (UID: \"5cb277f6-0a63-4989-9ea8-6437c4b1d09b\") " Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.285694 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cb277f6-0a63-4989-9ea8-6437c4b1d09b-kube-api-access-bk55s" (OuterVolumeSpecName: "kube-api-access-bk55s") pod "5cb277f6-0a63-4989-9ea8-6437c4b1d09b" (UID: "5cb277f6-0a63-4989-9ea8-6437c4b1d09b"). InnerVolumeSpecName "kube-api-access-bk55s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.365711 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb277f6-0a63-4989-9ea8-6437c4b1d09b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cb277f6-0a63-4989-9ea8-6437c4b1d09b" (UID: "5cb277f6-0a63-4989-9ea8-6437c4b1d09b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.386284 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk55s\" (UniqueName: \"kubernetes.io/projected/5cb277f6-0a63-4989-9ea8-6437c4b1d09b-kube-api-access-bk55s\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.386330 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb277f6-0a63-4989-9ea8-6437c4b1d09b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.397625 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb277f6-0a63-4989-9ea8-6437c4b1d09b-config-data" (OuterVolumeSpecName: "config-data") pod "5cb277f6-0a63-4989-9ea8-6437c4b1d09b" (UID: "5cb277f6-0a63-4989-9ea8-6437c4b1d09b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.439867 4728 scope.go:117] "RemoveContainer" containerID="a3e438cea8d8eca61ab4f50315598a30164960ef35bd6515c9bae8df76d98d83" Feb 27 10:51:05 crc kubenswrapper[4728]: I0227 10:51:05.493305 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb277f6-0a63-4989-9ea8-6437c4b1d09b-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.041665 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.111973 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b6qb\" (UniqueName: \"kubernetes.io/projected/50db51f7-f183-4a84-acd2-4320f33423f2-kube-api-access-6b6qb\") pod \"50db51f7-f183-4a84-acd2-4320f33423f2\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.112106 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50db51f7-f183-4a84-acd2-4320f33423f2-log-httpd\") pod \"50db51f7-f183-4a84-acd2-4320f33423f2\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.112193 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-scripts\") pod \"50db51f7-f183-4a84-acd2-4320f33423f2\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.112305 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-config-data\") pod 
\"50db51f7-f183-4a84-acd2-4320f33423f2\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.112350 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-sg-core-conf-yaml\") pod \"50db51f7-f183-4a84-acd2-4320f33423f2\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.112429 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50db51f7-f183-4a84-acd2-4320f33423f2-run-httpd\") pod \"50db51f7-f183-4a84-acd2-4320f33423f2\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.112456 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-combined-ca-bundle\") pod \"50db51f7-f183-4a84-acd2-4320f33423f2\" (UID: \"50db51f7-f183-4a84-acd2-4320f33423f2\") " Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.114951 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50db51f7-f183-4a84-acd2-4320f33423f2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "50db51f7-f183-4a84-acd2-4320f33423f2" (UID: "50db51f7-f183-4a84-acd2-4320f33423f2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.115983 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50db51f7-f183-4a84-acd2-4320f33423f2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "50db51f7-f183-4a84-acd2-4320f33423f2" (UID: "50db51f7-f183-4a84-acd2-4320f33423f2"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.130821 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50db51f7-f183-4a84-acd2-4320f33423f2-kube-api-access-6b6qb" (OuterVolumeSpecName: "kube-api-access-6b6qb") pod "50db51f7-f183-4a84-acd2-4320f33423f2" (UID: "50db51f7-f183-4a84-acd2-4320f33423f2"). InnerVolumeSpecName "kube-api-access-6b6qb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.131912 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-scripts" (OuterVolumeSpecName: "scripts") pod "50db51f7-f183-4a84-acd2-4320f33423f2" (UID: "50db51f7-f183-4a84-acd2-4320f33423f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.176816 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "50db51f7-f183-4a84-acd2-4320f33423f2" (UID: "50db51f7-f183-4a84-acd2-4320f33423f2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.216684 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b6qb\" (UniqueName: \"kubernetes.io/projected/50db51f7-f183-4a84-acd2-4320f33423f2-kube-api-access-6b6qb\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.216717 4728 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50db51f7-f183-4a84-acd2-4320f33423f2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.216726 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.216735 4728 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.216743 4728 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50db51f7-f183-4a84-acd2-4320f33423f2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.221265 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50db51f7-f183-4a84-acd2-4320f33423f2" (UID: "50db51f7-f183-4a84-acd2-4320f33423f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.240972 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="80b35d5f-b110-435d-a058-375286dc42d2" containerName="nova-api-log" containerID="cri-o://78f28e96064472a99d2434f1d3375d75145645bc839d8c108a12f38a70e3dcc6" gracePeriod=30 Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.241094 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="80b35d5f-b110-435d-a058-375286dc42d2" containerName="nova-api-api" containerID="cri-o://e778d3b7faed754a0a2bba9bd57b18271c1e7f43a17fc93c7805bc846de1fba1" gracePeriod=30 Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.241098 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"80b35d5f-b110-435d-a058-375286dc42d2","Type":"ContainerStarted","Data":"e778d3b7faed754a0a2bba9bd57b18271c1e7f43a17fc93c7805bc846de1fba1"} Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.243674 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5cb277f6-0a63-4989-9ea8-6437c4b1d09b","Type":"ContainerDied","Data":"6a99b56f8dc94979b3ce4c2e404da8288a5ab8200b7855c4366032c51095d44f"} Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.243730 4728 scope.go:117] "RemoveContainer" containerID="da30d1b253176558ecddefd1fb7703e69fde36feacbb30e226907c38bb2513e3" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.243870 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.257195 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4517eec5-2129-4678-9f45-e149b9ef35fc","Type":"ContainerStarted","Data":"d941464ec58816936182c705032d528e87baa8cb08243da4420eb01105c458af"} Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.257405 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4517eec5-2129-4678-9f45-e149b9ef35fc" containerName="nova-metadata-log" containerID="cri-o://7ce9602f0459c15fa8ef812329b3b2d8731f01218cbcfa8cac64d19acd373cf0" gracePeriod=30 Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.257555 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4517eec5-2129-4678-9f45-e149b9ef35fc" containerName="nova-metadata-metadata" containerID="cri-o://d941464ec58816936182c705032d528e87baa8cb08243da4420eb01105c458af" gracePeriod=30 Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.276757 4728 generic.go:334] "Generic (PLEG): container finished" podID="50db51f7-f183-4a84-acd2-4320f33423f2" containerID="fec23eb88f656a352e56a75c27663bd020af92ffa9bb82e89103888f0ef6656a" exitCode=0 Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.276796 4728 generic.go:334] "Generic (PLEG): container finished" podID="50db51f7-f183-4a84-acd2-4320f33423f2" containerID="45e426aaf7dcae12e9251a4339d4393052ed79f1a1ed3181e44b611f9642cad9" exitCode=0 Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.276835 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50db51f7-f183-4a84-acd2-4320f33423f2","Type":"ContainerDied","Data":"fec23eb88f656a352e56a75c27663bd020af92ffa9bb82e89103888f0ef6656a"} Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.276867 4728 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"50db51f7-f183-4a84-acd2-4320f33423f2","Type":"ContainerDied","Data":"45e426aaf7dcae12e9251a4339d4393052ed79f1a1ed3181e44b611f9642cad9"} Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.276885 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50db51f7-f183-4a84-acd2-4320f33423f2","Type":"ContainerDied","Data":"cc94aa8a47d61092c4258b26b06534ee442b22ca9f85dc9c7d7665aa4a114aba"} Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.276964 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.283440 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.742201579 podStartE2EDuration="10.283393495s" podCreationTimestamp="2026-02-27 10:50:56 +0000 UTC" firstStartedPulling="2026-02-27 10:50:58.029863264 +0000 UTC m=+1477.992229370" lastFinishedPulling="2026-02-27 10:51:04.57105518 +0000 UTC m=+1484.533421286" observedRunningTime="2026-02-27 10:51:06.265055437 +0000 UTC m=+1486.227421543" watchObservedRunningTime="2026-02-27 10:51:06.283393495 +0000 UTC m=+1486.245759611" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.319703 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.321557 4728 scope.go:117] "RemoveContainer" containerID="91668e3ea1180f688fb19b668d9650894cf159e3c2a5a0c8fff9d8851f70f25e" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.334626 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-config-data" (OuterVolumeSpecName: "config-data") pod 
"50db51f7-f183-4a84-acd2-4320f33423f2" (UID: "50db51f7-f183-4a84-acd2-4320f33423f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.345340 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.60372148 podStartE2EDuration="10.345316795s" podCreationTimestamp="2026-02-27 10:50:56 +0000 UTC" firstStartedPulling="2026-02-27 10:50:57.840066504 +0000 UTC m=+1477.802432610" lastFinishedPulling="2026-02-27 10:51:04.581661819 +0000 UTC m=+1484.544027925" observedRunningTime="2026-02-27 10:51:06.310594264 +0000 UTC m=+1486.272960400" watchObservedRunningTime="2026-02-27 10:51:06.345316795 +0000 UTC m=+1486.307682901" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.371135 4728 scope.go:117] "RemoveContainer" containerID="8e71b0296019b8dde3447ccb1c820c51daac7e5894773164e46843f7db3bf689" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.371285 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.390456 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.413874 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 10:51:06 crc kubenswrapper[4728]: E0227 10:51:06.416026 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50db51f7-f183-4a84-acd2-4320f33423f2" containerName="ceilometer-central-agent" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.416050 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="50db51f7-f183-4a84-acd2-4320f33423f2" containerName="ceilometer-central-agent" Feb 27 10:51:06 crc kubenswrapper[4728]: E0227 10:51:06.416066 4728 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="50db51f7-f183-4a84-acd2-4320f33423f2" containerName="proxy-httpd" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.416072 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="50db51f7-f183-4a84-acd2-4320f33423f2" containerName="proxy-httpd" Feb 27 10:51:06 crc kubenswrapper[4728]: E0227 10:51:06.416103 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cb277f6-0a63-4989-9ea8-6437c4b1d09b" containerName="nova-cell0-conductor-conductor" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.416109 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cb277f6-0a63-4989-9ea8-6437c4b1d09b" containerName="nova-cell0-conductor-conductor" Feb 27 10:51:06 crc kubenswrapper[4728]: E0227 10:51:06.416135 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50db51f7-f183-4a84-acd2-4320f33423f2" containerName="ceilometer-notification-agent" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.416141 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="50db51f7-f183-4a84-acd2-4320f33423f2" containerName="ceilometer-notification-agent" Feb 27 10:51:06 crc kubenswrapper[4728]: E0227 10:51:06.416150 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50db51f7-f183-4a84-acd2-4320f33423f2" containerName="sg-core" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.416156 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="50db51f7-f183-4a84-acd2-4320f33423f2" containerName="sg-core" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.416565 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="50db51f7-f183-4a84-acd2-4320f33423f2" containerName="proxy-httpd" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.416591 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="50db51f7-f183-4a84-acd2-4320f33423f2" containerName="sg-core" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.416665 4728 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="5cb277f6-0a63-4989-9ea8-6437c4b1d09b" containerName="nova-cell0-conductor-conductor" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.416677 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="50db51f7-f183-4a84-acd2-4320f33423f2" containerName="ceilometer-central-agent" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.416688 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="50db51f7-f183-4a84-acd2-4320f33423f2" containerName="ceilometer-notification-agent" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.417937 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.419955 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.422150 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50db51f7-f183-4a84-acd2-4320f33423f2-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.423976 4728 scope.go:117] "RemoveContainer" containerID="fec23eb88f656a352e56a75c27663bd020af92ffa9bb82e89103888f0ef6656a" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.437074 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.471282 4728 scope.go:117] "RemoveContainer" containerID="45e426aaf7dcae12e9251a4339d4393052ed79f1a1ed3181e44b611f9642cad9" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.494314 4728 scope.go:117] "RemoveContainer" containerID="91668e3ea1180f688fb19b668d9650894cf159e3c2a5a0c8fff9d8851f70f25e" Feb 27 10:51:06 crc kubenswrapper[4728]: E0227 10:51:06.494931 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"91668e3ea1180f688fb19b668d9650894cf159e3c2a5a0c8fff9d8851f70f25e\": container with ID starting with 91668e3ea1180f688fb19b668d9650894cf159e3c2a5a0c8fff9d8851f70f25e not found: ID does not exist" containerID="91668e3ea1180f688fb19b668d9650894cf159e3c2a5a0c8fff9d8851f70f25e" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.495080 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91668e3ea1180f688fb19b668d9650894cf159e3c2a5a0c8fff9d8851f70f25e"} err="failed to get container status \"91668e3ea1180f688fb19b668d9650894cf159e3c2a5a0c8fff9d8851f70f25e\": rpc error: code = NotFound desc = could not find container \"91668e3ea1180f688fb19b668d9650894cf159e3c2a5a0c8fff9d8851f70f25e\": container with ID starting with 91668e3ea1180f688fb19b668d9650894cf159e3c2a5a0c8fff9d8851f70f25e not found: ID does not exist" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.495216 4728 scope.go:117] "RemoveContainer" containerID="8e71b0296019b8dde3447ccb1c820c51daac7e5894773164e46843f7db3bf689" Feb 27 10:51:06 crc kubenswrapper[4728]: E0227 10:51:06.495727 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e71b0296019b8dde3447ccb1c820c51daac7e5894773164e46843f7db3bf689\": container with ID starting with 8e71b0296019b8dde3447ccb1c820c51daac7e5894773164e46843f7db3bf689 not found: ID does not exist" containerID="8e71b0296019b8dde3447ccb1c820c51daac7e5894773164e46843f7db3bf689" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.495805 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e71b0296019b8dde3447ccb1c820c51daac7e5894773164e46843f7db3bf689"} err="failed to get container status \"8e71b0296019b8dde3447ccb1c820c51daac7e5894773164e46843f7db3bf689\": rpc error: code = NotFound desc = could not find container 
\"8e71b0296019b8dde3447ccb1c820c51daac7e5894773164e46843f7db3bf689\": container with ID starting with 8e71b0296019b8dde3447ccb1c820c51daac7e5894773164e46843f7db3bf689 not found: ID does not exist" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.495853 4728 scope.go:117] "RemoveContainer" containerID="fec23eb88f656a352e56a75c27663bd020af92ffa9bb82e89103888f0ef6656a" Feb 27 10:51:06 crc kubenswrapper[4728]: E0227 10:51:06.496879 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fec23eb88f656a352e56a75c27663bd020af92ffa9bb82e89103888f0ef6656a\": container with ID starting with fec23eb88f656a352e56a75c27663bd020af92ffa9bb82e89103888f0ef6656a not found: ID does not exist" containerID="fec23eb88f656a352e56a75c27663bd020af92ffa9bb82e89103888f0ef6656a" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.496910 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fec23eb88f656a352e56a75c27663bd020af92ffa9bb82e89103888f0ef6656a"} err="failed to get container status \"fec23eb88f656a352e56a75c27663bd020af92ffa9bb82e89103888f0ef6656a\": rpc error: code = NotFound desc = could not find container \"fec23eb88f656a352e56a75c27663bd020af92ffa9bb82e89103888f0ef6656a\": container with ID starting with fec23eb88f656a352e56a75c27663bd020af92ffa9bb82e89103888f0ef6656a not found: ID does not exist" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.496931 4728 scope.go:117] "RemoveContainer" containerID="45e426aaf7dcae12e9251a4339d4393052ed79f1a1ed3181e44b611f9642cad9" Feb 27 10:51:06 crc kubenswrapper[4728]: E0227 10:51:06.497276 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45e426aaf7dcae12e9251a4339d4393052ed79f1a1ed3181e44b611f9642cad9\": container with ID starting with 45e426aaf7dcae12e9251a4339d4393052ed79f1a1ed3181e44b611f9642cad9 not found: ID does not exist" 
containerID="45e426aaf7dcae12e9251a4339d4393052ed79f1a1ed3181e44b611f9642cad9" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.497325 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e426aaf7dcae12e9251a4339d4393052ed79f1a1ed3181e44b611f9642cad9"} err="failed to get container status \"45e426aaf7dcae12e9251a4339d4393052ed79f1a1ed3181e44b611f9642cad9\": rpc error: code = NotFound desc = could not find container \"45e426aaf7dcae12e9251a4339d4393052ed79f1a1ed3181e44b611f9642cad9\": container with ID starting with 45e426aaf7dcae12e9251a4339d4393052ed79f1a1ed3181e44b611f9642cad9 not found: ID does not exist" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.497381 4728 scope.go:117] "RemoveContainer" containerID="91668e3ea1180f688fb19b668d9650894cf159e3c2a5a0c8fff9d8851f70f25e" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.497765 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91668e3ea1180f688fb19b668d9650894cf159e3c2a5a0c8fff9d8851f70f25e"} err="failed to get container status \"91668e3ea1180f688fb19b668d9650894cf159e3c2a5a0c8fff9d8851f70f25e\": rpc error: code = NotFound desc = could not find container \"91668e3ea1180f688fb19b668d9650894cf159e3c2a5a0c8fff9d8851f70f25e\": container with ID starting with 91668e3ea1180f688fb19b668d9650894cf159e3c2a5a0c8fff9d8851f70f25e not found: ID does not exist" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.497794 4728 scope.go:117] "RemoveContainer" containerID="8e71b0296019b8dde3447ccb1c820c51daac7e5894773164e46843f7db3bf689" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.498072 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e71b0296019b8dde3447ccb1c820c51daac7e5894773164e46843f7db3bf689"} err="failed to get container status \"8e71b0296019b8dde3447ccb1c820c51daac7e5894773164e46843f7db3bf689\": rpc error: code = NotFound desc = could 
not find container \"8e71b0296019b8dde3447ccb1c820c51daac7e5894773164e46843f7db3bf689\": container with ID starting with 8e71b0296019b8dde3447ccb1c820c51daac7e5894773164e46843f7db3bf689 not found: ID does not exist" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.498115 4728 scope.go:117] "RemoveContainer" containerID="fec23eb88f656a352e56a75c27663bd020af92ffa9bb82e89103888f0ef6656a" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.498969 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fec23eb88f656a352e56a75c27663bd020af92ffa9bb82e89103888f0ef6656a"} err="failed to get container status \"fec23eb88f656a352e56a75c27663bd020af92ffa9bb82e89103888f0ef6656a\": rpc error: code = NotFound desc = could not find container \"fec23eb88f656a352e56a75c27663bd020af92ffa9bb82e89103888f0ef6656a\": container with ID starting with fec23eb88f656a352e56a75c27663bd020af92ffa9bb82e89103888f0ef6656a not found: ID does not exist" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.499008 4728 scope.go:117] "RemoveContainer" containerID="45e426aaf7dcae12e9251a4339d4393052ed79f1a1ed3181e44b611f9642cad9" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.499205 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e426aaf7dcae12e9251a4339d4393052ed79f1a1ed3181e44b611f9642cad9"} err="failed to get container status \"45e426aaf7dcae12e9251a4339d4393052ed79f1a1ed3181e44b611f9642cad9\": rpc error: code = NotFound desc = could not find container \"45e426aaf7dcae12e9251a4339d4393052ed79f1a1ed3181e44b611f9642cad9\": container with ID starting with 45e426aaf7dcae12e9251a4339d4393052ed79f1a1ed3181e44b611f9642cad9 not found: ID does not exist" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.524411 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3315d635-81af-4a85-b41f-d9736448876a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3315d635-81af-4a85-b41f-d9736448876a\") " pod="openstack/nova-cell0-conductor-0" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.524661 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvp75\" (UniqueName: \"kubernetes.io/projected/3315d635-81af-4a85-b41f-d9736448876a-kube-api-access-nvp75\") pod \"nova-cell0-conductor-0\" (UID: \"3315d635-81af-4a85-b41f-d9736448876a\") " pod="openstack/nova-cell0-conductor-0" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.524710 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3315d635-81af-4a85-b41f-d9736448876a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3315d635-81af-4a85-b41f-d9736448876a\") " pod="openstack/nova-cell0-conductor-0" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.627161 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvp75\" (UniqueName: \"kubernetes.io/projected/3315d635-81af-4a85-b41f-d9736448876a-kube-api-access-nvp75\") pod \"nova-cell0-conductor-0\" (UID: \"3315d635-81af-4a85-b41f-d9736448876a\") " pod="openstack/nova-cell0-conductor-0" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.627241 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3315d635-81af-4a85-b41f-d9736448876a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3315d635-81af-4a85-b41f-d9736448876a\") " pod="openstack/nova-cell0-conductor-0" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.627392 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3315d635-81af-4a85-b41f-d9736448876a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3315d635-81af-4a85-b41f-d9736448876a\") " pod="openstack/nova-cell0-conductor-0" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.636394 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3315d635-81af-4a85-b41f-d9736448876a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3315d635-81af-4a85-b41f-d9736448876a\") " pod="openstack/nova-cell0-conductor-0" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.636466 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3315d635-81af-4a85-b41f-d9736448876a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3315d635-81af-4a85-b41f-d9736448876a\") " pod="openstack/nova-cell0-conductor-0" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.641876 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvp75\" (UniqueName: \"kubernetes.io/projected/3315d635-81af-4a85-b41f-d9736448876a-kube-api-access-nvp75\") pod \"nova-cell0-conductor-0\" (UID: \"3315d635-81af-4a85-b41f-d9736448876a\") " pod="openstack/nova-cell0-conductor-0" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.747271 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.765102 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cb277f6-0a63-4989-9ea8-6437c4b1d09b" path="/var/lib/kubelet/pods/5cb277f6-0a63-4989-9ea8-6437c4b1d09b/volumes" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.767222 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.769760 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.770636 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.770780 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.875935 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.944635 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:51:06 crc kubenswrapper[4728]: I0227 10:51:06.971686 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.041288 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.044170 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-6gjtz"] Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.044479 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" podUID="11767f8f-ebed-4306-b5f7-5e79182d0ad1" containerName="dnsmasq-dns" containerID="cri-o://5e17701145928d05b423e0f776dfca096755255ffa760b4215739c6efdf2b783" gracePeriod=10 Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.078372 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80b35d5f-b110-435d-a058-375286dc42d2-config-data\") pod \"80b35d5f-b110-435d-a058-375286dc42d2\" (UID: \"80b35d5f-b110-435d-a058-375286dc42d2\") " Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.080762 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b35d5f-b110-435d-a058-375286dc42d2-combined-ca-bundle\") pod \"80b35d5f-b110-435d-a058-375286dc42d2\" (UID: \"80b35d5f-b110-435d-a058-375286dc42d2\") " Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.080881 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5r6x\" (UniqueName: \"kubernetes.io/projected/80b35d5f-b110-435d-a058-375286dc42d2-kube-api-access-x5r6x\") pod \"80b35d5f-b110-435d-a058-375286dc42d2\" (UID: \"80b35d5f-b110-435d-a058-375286dc42d2\") " Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.080933 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80b35d5f-b110-435d-a058-375286dc42d2-logs\") pod \"80b35d5f-b110-435d-a058-375286dc42d2\" (UID: \"80b35d5f-b110-435d-a058-375286dc42d2\") " Feb 27 10:51:07 
crc kubenswrapper[4728]: I0227 10:51:07.084040 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80b35d5f-b110-435d-a058-375286dc42d2-logs" (OuterVolumeSpecName: "logs") pod "80b35d5f-b110-435d-a058-375286dc42d2" (UID: "80b35d5f-b110-435d-a058-375286dc42d2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.090153 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80b35d5f-b110-435d-a058-375286dc42d2-kube-api-access-x5r6x" (OuterVolumeSpecName: "kube-api-access-x5r6x") pod "80b35d5f-b110-435d-a058-375286dc42d2" (UID: "80b35d5f-b110-435d-a058-375286dc42d2"). InnerVolumeSpecName "kube-api-access-x5r6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.112751 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:51:07 crc kubenswrapper[4728]: E0227 10:51:07.113406 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80b35d5f-b110-435d-a058-375286dc42d2" containerName="nova-api-api" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.113445 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="80b35d5f-b110-435d-a058-375286dc42d2" containerName="nova-api-api" Feb 27 10:51:07 crc kubenswrapper[4728]: E0227 10:51:07.113473 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80b35d5f-b110-435d-a058-375286dc42d2" containerName="nova-api-log" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.113480 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="80b35d5f-b110-435d-a058-375286dc42d2" containerName="nova-api-log" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.113836 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="80b35d5f-b110-435d-a058-375286dc42d2" containerName="nova-api-log" Feb 27 
10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.113866 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="80b35d5f-b110-435d-a058-375286dc42d2" containerName="nova-api-api" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.116619 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.126260 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.126976 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.134808 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.154926 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80b35d5f-b110-435d-a058-375286dc42d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80b35d5f-b110-435d-a058-375286dc42d2" (UID: "80b35d5f-b110-435d-a058-375286dc42d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.173842 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80b35d5f-b110-435d-a058-375286dc42d2-config-data" (OuterVolumeSpecName: "config-data") pod "80b35d5f-b110-435d-a058-375286dc42d2" (UID: "80b35d5f-b110-435d-a058-375286dc42d2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.185229 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-config-data\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.185335 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkfdk\" (UniqueName: \"kubernetes.io/projected/136f2587-addd-432e-a591-fc74213bf87c-kube-api-access-kkfdk\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.185415 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.185447 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/136f2587-addd-432e-a591-fc74213bf87c-log-httpd\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.185998 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-scripts\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.186111 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/136f2587-addd-432e-a591-fc74213bf87c-run-httpd\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.186192 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.186297 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5r6x\" (UniqueName: \"kubernetes.io/projected/80b35d5f-b110-435d-a058-375286dc42d2-kube-api-access-x5r6x\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.186316 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80b35d5f-b110-435d-a058-375286dc42d2-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.186328 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80b35d5f-b110-435d-a058-375286dc42d2-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.186342 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b35d5f-b110-435d-a058-375286dc42d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.288655 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-scripts\") pod 
\"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.289322 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/136f2587-addd-432e-a591-fc74213bf87c-run-httpd\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.290679 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.290840 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-config-data\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.290970 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkfdk\" (UniqueName: \"kubernetes.io/projected/136f2587-addd-432e-a591-fc74213bf87c-kube-api-access-kkfdk\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.291148 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.291256 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/136f2587-addd-432e-a591-fc74213bf87c-log-httpd\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.291956 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/136f2587-addd-432e-a591-fc74213bf87c-log-httpd\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.290178 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/136f2587-addd-432e-a591-fc74213bf87c-run-httpd\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.297257 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.297851 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.299968 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-config-data\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc 
kubenswrapper[4728]: I0227 10:51:07.301483 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-scripts\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.312126 4728 generic.go:334] "Generic (PLEG): container finished" podID="4517eec5-2129-4678-9f45-e149b9ef35fc" containerID="d941464ec58816936182c705032d528e87baa8cb08243da4420eb01105c458af" exitCode=0 Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.312171 4728 generic.go:334] "Generic (PLEG): container finished" podID="4517eec5-2129-4678-9f45-e149b9ef35fc" containerID="7ce9602f0459c15fa8ef812329b3b2d8731f01218cbcfa8cac64d19acd373cf0" exitCode=143 Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.312483 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4517eec5-2129-4678-9f45-e149b9ef35fc","Type":"ContainerDied","Data":"d941464ec58816936182c705032d528e87baa8cb08243da4420eb01105c458af"} Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.312567 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4517eec5-2129-4678-9f45-e149b9ef35fc","Type":"ContainerDied","Data":"7ce9602f0459c15fa8ef812329b3b2d8731f01218cbcfa8cac64d19acd373cf0"} Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.317072 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkfdk\" (UniqueName: \"kubernetes.io/projected/136f2587-addd-432e-a591-fc74213bf87c-kube-api-access-kkfdk\") pod \"ceilometer-0\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.317602 4728 generic.go:334] "Generic (PLEG): container finished" podID="11767f8f-ebed-4306-b5f7-5e79182d0ad1" 
containerID="5e17701145928d05b423e0f776dfca096755255ffa760b4215739c6efdf2b783" exitCode=0 Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.317642 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" event={"ID":"11767f8f-ebed-4306-b5f7-5e79182d0ad1","Type":"ContainerDied","Data":"5e17701145928d05b423e0f776dfca096755255ffa760b4215739c6efdf2b783"} Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.319000 4728 generic.go:334] "Generic (PLEG): container finished" podID="80b35d5f-b110-435d-a058-375286dc42d2" containerID="e778d3b7faed754a0a2bba9bd57b18271c1e7f43a17fc93c7805bc846de1fba1" exitCode=0 Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.319015 4728 generic.go:334] "Generic (PLEG): container finished" podID="80b35d5f-b110-435d-a058-375286dc42d2" containerID="78f28e96064472a99d2434f1d3375d75145645bc839d8c108a12f38a70e3dcc6" exitCode=143 Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.319042 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"80b35d5f-b110-435d-a058-375286dc42d2","Type":"ContainerDied","Data":"e778d3b7faed754a0a2bba9bd57b18271c1e7f43a17fc93c7805bc846de1fba1"} Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.319057 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"80b35d5f-b110-435d-a058-375286dc42d2","Type":"ContainerDied","Data":"78f28e96064472a99d2434f1d3375d75145645bc839d8c108a12f38a70e3dcc6"} Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.319065 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"80b35d5f-b110-435d-a058-375286dc42d2","Type":"ContainerDied","Data":"129f1cf9af9f01d8b9fcdb9f577f4725c59b85c89f38902753db9b2323ed8e9c"} Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.319079 4728 scope.go:117] "RemoveContainer" containerID="e778d3b7faed754a0a2bba9bd57b18271c1e7f43a17fc93c7805bc846de1fba1" Feb 27 10:51:07 crc 
kubenswrapper[4728]: I0227 10:51:07.319204 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.393091 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.415525 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.453711 4728 scope.go:117] "RemoveContainer" containerID="78f28e96064472a99d2434f1d3375d75145645bc839d8c108a12f38a70e3dcc6" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.484784 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.487284 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.497592 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.498808 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.501293 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\") " pod="openstack/nova-api-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.501328 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-logs\") pod \"nova-api-0\" (UID: \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\") " pod="openstack/nova-api-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.501388 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-config-data\") pod \"nova-api-0\" (UID: \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\") " pod="openstack/nova-api-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.501461 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stllz\" (UniqueName: \"kubernetes.io/projected/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-kube-api-access-stllz\") pod \"nova-api-0\" (UID: \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\") " pod="openstack/nova-api-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.535583 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.606895 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\") " pod="openstack/nova-api-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.606951 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-logs\") pod \"nova-api-0\" (UID: \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\") " pod="openstack/nova-api-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.607026 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-config-data\") pod \"nova-api-0\" (UID: \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\") " pod="openstack/nova-api-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.607128 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stllz\" (UniqueName: \"kubernetes.io/projected/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-kube-api-access-stllz\") pod \"nova-api-0\" (UID: \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\") " pod="openstack/nova-api-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.608699 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-logs\") pod \"nova-api-0\" (UID: \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\") " pod="openstack/nova-api-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.616642 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\") " pod="openstack/nova-api-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.628148 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-config-data\") pod \"nova-api-0\" (UID: \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\") " pod="openstack/nova-api-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.654739 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stllz\" (UniqueName: \"kubernetes.io/projected/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-kube-api-access-stllz\") pod \"nova-api-0\" (UID: \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\") " pod="openstack/nova-api-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.850068 4728 scope.go:117] "RemoveContainer" containerID="e778d3b7faed754a0a2bba9bd57b18271c1e7f43a17fc93c7805bc846de1fba1" Feb 27 10:51:07 crc kubenswrapper[4728]: E0227 10:51:07.856092 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e778d3b7faed754a0a2bba9bd57b18271c1e7f43a17fc93c7805bc846de1fba1\": container with ID starting with e778d3b7faed754a0a2bba9bd57b18271c1e7f43a17fc93c7805bc846de1fba1 not found: ID does not exist" containerID="e778d3b7faed754a0a2bba9bd57b18271c1e7f43a17fc93c7805bc846de1fba1" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.856131 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e778d3b7faed754a0a2bba9bd57b18271c1e7f43a17fc93c7805bc846de1fba1"} err="failed to get container status \"e778d3b7faed754a0a2bba9bd57b18271c1e7f43a17fc93c7805bc846de1fba1\": rpc error: code = NotFound desc = could not find container \"e778d3b7faed754a0a2bba9bd57b18271c1e7f43a17fc93c7805bc846de1fba1\": container with ID starting with e778d3b7faed754a0a2bba9bd57b18271c1e7f43a17fc93c7805bc846de1fba1 not found: ID does not exist" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.856160 4728 scope.go:117] "RemoveContainer" containerID="78f28e96064472a99d2434f1d3375d75145645bc839d8c108a12f38a70e3dcc6" Feb 27 10:51:07 crc kubenswrapper[4728]: E0227 
10:51:07.873644 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78f28e96064472a99d2434f1d3375d75145645bc839d8c108a12f38a70e3dcc6\": container with ID starting with 78f28e96064472a99d2434f1d3375d75145645bc839d8c108a12f38a70e3dcc6 not found: ID does not exist" containerID="78f28e96064472a99d2434f1d3375d75145645bc839d8c108a12f38a70e3dcc6" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.873703 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78f28e96064472a99d2434f1d3375d75145645bc839d8c108a12f38a70e3dcc6"} err="failed to get container status \"78f28e96064472a99d2434f1d3375d75145645bc839d8c108a12f38a70e3dcc6\": rpc error: code = NotFound desc = could not find container \"78f28e96064472a99d2434f1d3375d75145645bc839d8c108a12f38a70e3dcc6\": container with ID starting with 78f28e96064472a99d2434f1d3375d75145645bc839d8c108a12f38a70e3dcc6 not found: ID does not exist" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.873736 4728 scope.go:117] "RemoveContainer" containerID="e778d3b7faed754a0a2bba9bd57b18271c1e7f43a17fc93c7805bc846de1fba1" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.884860 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e778d3b7faed754a0a2bba9bd57b18271c1e7f43a17fc93c7805bc846de1fba1"} err="failed to get container status \"e778d3b7faed754a0a2bba9bd57b18271c1e7f43a17fc93c7805bc846de1fba1\": rpc error: code = NotFound desc = could not find container \"e778d3b7faed754a0a2bba9bd57b18271c1e7f43a17fc93c7805bc846de1fba1\": container with ID starting with e778d3b7faed754a0a2bba9bd57b18271c1e7f43a17fc93c7805bc846de1fba1 not found: ID does not exist" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.884902 4728 scope.go:117] "RemoveContainer" containerID="78f28e96064472a99d2434f1d3375d75145645bc839d8c108a12f38a70e3dcc6" Feb 27 10:51:07 crc 
kubenswrapper[4728]: I0227 10:51:07.886705 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78f28e96064472a99d2434f1d3375d75145645bc839d8c108a12f38a70e3dcc6"} err="failed to get container status \"78f28e96064472a99d2434f1d3375d75145645bc839d8c108a12f38a70e3dcc6\": rpc error: code = NotFound desc = could not find container \"78f28e96064472a99d2434f1d3375d75145645bc839d8c108a12f38a70e3dcc6\": container with ID starting with 78f28e96064472a99d2434f1d3375d75145645bc839d8c108a12f38a70e3dcc6 not found: ID does not exist" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.891776 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:51:07 crc kubenswrapper[4728]: I0227 10:51:07.901853 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.044433 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.047082 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqxfr\" (UniqueName: \"kubernetes.io/projected/4517eec5-2129-4678-9f45-e149b9ef35fc-kube-api-access-pqxfr\") pod \"4517eec5-2129-4678-9f45-e149b9ef35fc\" (UID: \"4517eec5-2129-4678-9f45-e149b9ef35fc\") " Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.047167 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4517eec5-2129-4678-9f45-e149b9ef35fc-config-data\") pod \"4517eec5-2129-4678-9f45-e149b9ef35fc\" (UID: \"4517eec5-2129-4678-9f45-e149b9ef35fc\") " Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.047356 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4517eec5-2129-4678-9f45-e149b9ef35fc-logs\") pod \"4517eec5-2129-4678-9f45-e149b9ef35fc\" (UID: \"4517eec5-2129-4678-9f45-e149b9ef35fc\") " Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.047671 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4517eec5-2129-4678-9f45-e149b9ef35fc-combined-ca-bundle\") pod \"4517eec5-2129-4678-9f45-e149b9ef35fc\" (UID: \"4517eec5-2129-4678-9f45-e149b9ef35fc\") " Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.049944 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4517eec5-2129-4678-9f45-e149b9ef35fc-logs" (OuterVolumeSpecName: "logs") pod "4517eec5-2129-4678-9f45-e149b9ef35fc" (UID: "4517eec5-2129-4678-9f45-e149b9ef35fc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.052292 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4517eec5-2129-4678-9f45-e149b9ef35fc-kube-api-access-pqxfr" (OuterVolumeSpecName: "kube-api-access-pqxfr") pod "4517eec5-2129-4678-9f45-e149b9ef35fc" (UID: "4517eec5-2129-4678-9f45-e149b9ef35fc"). InnerVolumeSpecName "kube-api-access-pqxfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:08 crc kubenswrapper[4728]: W0227 10:51:08.072636 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3315d635_81af_4a85_b41f_d9736448876a.slice/crio-4212f06a555943ed0cb38c3289720e0b73883563b5019348522bce6eb8d42536 WatchSource:0}: Error finding container 4212f06a555943ed0cb38c3289720e0b73883563b5019348522bce6eb8d42536: Status 404 returned error can't find the container with id 4212f06a555943ed0cb38c3289720e0b73883563b5019348522bce6eb8d42536 Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.091619 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4517eec5-2129-4678-9f45-e149b9ef35fc-config-data" (OuterVolumeSpecName: "config-data") pod "4517eec5-2129-4678-9f45-e149b9ef35fc" (UID: "4517eec5-2129-4678-9f45-e149b9ef35fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.100709 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4517eec5-2129-4678-9f45-e149b9ef35fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4517eec5-2129-4678-9f45-e149b9ef35fc" (UID: "4517eec5-2129-4678-9f45-e149b9ef35fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.114167 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.150998 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4517eec5-2129-4678-9f45-e149b9ef35fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.151043 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqxfr\" (UniqueName: \"kubernetes.io/projected/4517eec5-2129-4678-9f45-e149b9ef35fc-kube-api-access-pqxfr\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.151059 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4517eec5-2129-4678-9f45-e149b9ef35fc-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.151069 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4517eec5-2129-4678-9f45-e149b9ef35fc-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.252090 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-ovsdbserver-nb\") pod \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.257737 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-ovsdbserver-sb\") pod \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.257819 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-dns-swift-storage-0\") pod \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.257867 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-config\") pod \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.257995 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smjx7\" (UniqueName: \"kubernetes.io/projected/11767f8f-ebed-4306-b5f7-5e79182d0ad1-kube-api-access-smjx7\") pod \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.258059 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-dns-svc\") pod \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.269333 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11767f8f-ebed-4306-b5f7-5e79182d0ad1-kube-api-access-smjx7" (OuterVolumeSpecName: "kube-api-access-smjx7") pod "11767f8f-ebed-4306-b5f7-5e79182d0ad1" (UID: "11767f8f-ebed-4306-b5f7-5e79182d0ad1"). InnerVolumeSpecName "kube-api-access-smjx7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.305766 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:51:08 crc kubenswrapper[4728]: W0227 10:51:08.318919 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod136f2587_addd_432e_a591_fc74213bf87c.slice/crio-a674d4c4b12ee092a5d35c78b9b919f291cbd07c79fe9a8f3dcf096b23815eac WatchSource:0}: Error finding container a674d4c4b12ee092a5d35c78b9b919f291cbd07c79fe9a8f3dcf096b23815eac: Status 404 returned error can't find the container with id a674d4c4b12ee092a5d35c78b9b919f291cbd07c79fe9a8f3dcf096b23815eac Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.349805 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"136f2587-addd-432e-a591-fc74213bf87c","Type":"ContainerStarted","Data":"a674d4c4b12ee092a5d35c78b9b919f291cbd07c79fe9a8f3dcf096b23815eac"} Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.353323 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" event={"ID":"11767f8f-ebed-4306-b5f7-5e79182d0ad1","Type":"ContainerDied","Data":"1be40c6916fcc0e090d964452b1c7f18ec26dbd9e8c82ad7fecf1cedce25a959"} Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.353364 4728 scope.go:117] "RemoveContainer" containerID="5e17701145928d05b423e0f776dfca096755255ffa760b4215739c6efdf2b783" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.353470 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-6gjtz" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.354164 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "11767f8f-ebed-4306-b5f7-5e79182d0ad1" (UID: "11767f8f-ebed-4306-b5f7-5e79182d0ad1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.359964 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "11767f8f-ebed-4306-b5f7-5e79182d0ad1" (UID: "11767f8f-ebed-4306-b5f7-5e79182d0ad1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.360175 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-ovsdbserver-sb\") pod \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\" (UID: \"11767f8f-ebed-4306-b5f7-5e79182d0ad1\") " Feb 27 10:51:08 crc kubenswrapper[4728]: W0227 10:51:08.360580 4728 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/11767f8f-ebed-4306-b5f7-5e79182d0ad1/volumes/kubernetes.io~configmap/ovsdbserver-sb Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.360646 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "11767f8f-ebed-4306-b5f7-5e79182d0ad1" (UID: "11767f8f-ebed-4306-b5f7-5e79182d0ad1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.361559 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.361586 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smjx7\" (UniqueName: \"kubernetes.io/projected/11767f8f-ebed-4306-b5f7-5e79182d0ad1-kube-api-access-smjx7\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.361601 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.364696 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3315d635-81af-4a85-b41f-d9736448876a","Type":"ContainerStarted","Data":"4212f06a555943ed0cb38c3289720e0b73883563b5019348522bce6eb8d42536"} Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.367418 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4517eec5-2129-4678-9f45-e149b9ef35fc","Type":"ContainerDied","Data":"e497f6be891f58a2fee88db240430f4559d5c6df45a8cae5fa3c9c17edf1e631"} Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.367614 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.380377 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11767f8f-ebed-4306-b5f7-5e79182d0ad1" (UID: "11767f8f-ebed-4306-b5f7-5e79182d0ad1"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.385277 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "11767f8f-ebed-4306-b5f7-5e79182d0ad1" (UID: "11767f8f-ebed-4306-b5f7-5e79182d0ad1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.394910 4728 scope.go:117] "RemoveContainer" containerID="82bbcb2b1994af24f0bdba82fd3d51d0b4a6dc8b8e9e02752b18ffd92dee14f2" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.417389 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-config" (OuterVolumeSpecName: "config") pod "11767f8f-ebed-4306-b5f7-5e79182d0ad1" (UID: "11767f8f-ebed-4306-b5f7-5e79182d0ad1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.418631 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.437438 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.444987 4728 scope.go:117] "RemoveContainer" containerID="d941464ec58816936182c705032d528e87baa8cb08243da4420eb01105c458af" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.452381 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:51:08 crc kubenswrapper[4728]: E0227 10:51:08.453156 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11767f8f-ebed-4306-b5f7-5e79182d0ad1" containerName="init" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.453169 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="11767f8f-ebed-4306-b5f7-5e79182d0ad1" containerName="init" Feb 27 10:51:08 crc kubenswrapper[4728]: E0227 10:51:08.453189 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4517eec5-2129-4678-9f45-e149b9ef35fc" containerName="nova-metadata-log" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.453195 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4517eec5-2129-4678-9f45-e149b9ef35fc" containerName="nova-metadata-log" Feb 27 10:51:08 crc kubenswrapper[4728]: E0227 10:51:08.453224 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4517eec5-2129-4678-9f45-e149b9ef35fc" containerName="nova-metadata-metadata" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.453233 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4517eec5-2129-4678-9f45-e149b9ef35fc" containerName="nova-metadata-metadata" Feb 27 10:51:08 crc kubenswrapper[4728]: E0227 10:51:08.453272 4728 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="11767f8f-ebed-4306-b5f7-5e79182d0ad1" containerName="dnsmasq-dns" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.453278 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="11767f8f-ebed-4306-b5f7-5e79182d0ad1" containerName="dnsmasq-dns" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.453468 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4517eec5-2129-4678-9f45-e149b9ef35fc" containerName="nova-metadata-metadata" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.453483 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="11767f8f-ebed-4306-b5f7-5e79182d0ad1" containerName="dnsmasq-dns" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.453497 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4517eec5-2129-4678-9f45-e149b9ef35fc" containerName="nova-metadata-log" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.454673 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.461136 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.461252 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.463671 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.463694 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.463707 4728 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11767f8f-ebed-4306-b5f7-5e79182d0ad1-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.479772 4728 scope.go:117] "RemoveContainer" containerID="7ce9602f0459c15fa8ef812329b3b2d8731f01218cbcfa8cac64d19acd373cf0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.483862 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.565931 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4q28\" (UniqueName: \"kubernetes.io/projected/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-kube-api-access-n4q28\") pod \"nova-metadata-0\" (UID: \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " pod="openstack/nova-metadata-0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.566058 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-logs\") pod \"nova-metadata-0\" (UID: \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " pod="openstack/nova-metadata-0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.566081 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-config-data\") pod \"nova-metadata-0\" (UID: \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " pod="openstack/nova-metadata-0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.566106 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " pod="openstack/nova-metadata-0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.566124 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " pod="openstack/nova-metadata-0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.667593 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4q28\" (UniqueName: \"kubernetes.io/projected/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-kube-api-access-n4q28\") pod \"nova-metadata-0\" (UID: \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " pod="openstack/nova-metadata-0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.668032 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-logs\") pod \"nova-metadata-0\" (UID: \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " pod="openstack/nova-metadata-0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.668064 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-config-data\") pod \"nova-metadata-0\" (UID: \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " pod="openstack/nova-metadata-0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.668426 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-logs\") pod \"nova-metadata-0\" (UID: \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " pod="openstack/nova-metadata-0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.669185 4728 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " pod="openstack/nova-metadata-0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.669214 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " pod="openstack/nova-metadata-0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.676515 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-config-data\") pod \"nova-metadata-0\" (UID: \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " pod="openstack/nova-metadata-0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.678276 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " pod="openstack/nova-metadata-0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.685347 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " pod="openstack/nova-metadata-0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.685698 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4q28\" (UniqueName: \"kubernetes.io/projected/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-kube-api-access-n4q28\") pod \"nova-metadata-0\" (UID: 
\"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " pod="openstack/nova-metadata-0" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.715765 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.780301 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4517eec5-2129-4678-9f45-e149b9ef35fc" path="/var/lib/kubelet/pods/4517eec5-2129-4678-9f45-e149b9ef35fc/volumes" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.781276 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50db51f7-f183-4a84-acd2-4320f33423f2" path="/var/lib/kubelet/pods/50db51f7-f183-4a84-acd2-4320f33423f2/volumes" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.782762 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80b35d5f-b110-435d-a058-375286dc42d2" path="/var/lib/kubelet/pods/80b35d5f-b110-435d-a058-375286dc42d2/volumes" Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.826521 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-6gjtz"] Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.901075 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-6gjtz"] Feb 27 10:51:08 crc kubenswrapper[4728]: I0227 10:51:08.961626 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:51:09 crc kubenswrapper[4728]: E0227 10:51:09.138652 4728 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11767f8f_ebed_4306_b5f7_5e79182d0ad1.slice\": RecentStats: unable to find data in memory cache]" Feb 27 10:51:09 crc kubenswrapper[4728]: I0227 10:51:09.397878 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999","Type":"ContainerStarted","Data":"9d7f129f8f89000ace0bb28f9448a4f12cb09c4c0460f2252dadee25f40ec577"} Feb 27 10:51:09 crc kubenswrapper[4728]: I0227 10:51:09.398345 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999","Type":"ContainerStarted","Data":"cb737e98e80cdec80a0101a1a34cdf6778e737ce9fb5e469cfb2b3181957daa0"} Feb 27 10:51:09 crc kubenswrapper[4728]: I0227 10:51:09.398357 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999","Type":"ContainerStarted","Data":"0314621f89d8617223504b65cf6391c7ef428c82ebb744fb1f8f06db0c4e0af9"} Feb 27 10:51:09 crc kubenswrapper[4728]: I0227 10:51:09.403367 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:51:09 crc kubenswrapper[4728]: I0227 10:51:09.420350 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"136f2587-addd-432e-a591-fc74213bf87c","Type":"ContainerStarted","Data":"f1299dc3339493607dd2e559ab64e97ec7958bfd8a4d78aad865c5729007b21f"} Feb 27 10:51:09 crc kubenswrapper[4728]: I0227 10:51:09.424754 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.424715526 podStartE2EDuration="2.424715526s" podCreationTimestamp="2026-02-27 
10:51:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:51:09.420679586 +0000 UTC m=+1489.383045692" watchObservedRunningTime="2026-02-27 10:51:09.424715526 +0000 UTC m=+1489.387081632" Feb 27 10:51:09 crc kubenswrapper[4728]: I0227 10:51:09.438474 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3315d635-81af-4a85-b41f-d9736448876a","Type":"ContainerStarted","Data":"287a8bd0f389ab7c2215d38ba36f27cd797c18c8f22f6335a2e9529587860157"} Feb 27 10:51:09 crc kubenswrapper[4728]: I0227 10:51:09.439345 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 27 10:51:09 crc kubenswrapper[4728]: I0227 10:51:09.443834 4728 generic.go:334] "Generic (PLEG): container finished" podID="e93f65f0-9517-45e7-bcfc-3cbb70046b3e" containerID="9f6ea11dad6bb2049d938751dcc3375fb2100ea3e696693d6c0e5f6ed31e08ad" exitCode=0 Feb 27 10:51:09 crc kubenswrapper[4728]: I0227 10:51:09.443905 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mmv5g" event={"ID":"e93f65f0-9517-45e7-bcfc-3cbb70046b3e","Type":"ContainerDied","Data":"9f6ea11dad6bb2049d938751dcc3375fb2100ea3e696693d6c0e5f6ed31e08ad"} Feb 27 10:51:09 crc kubenswrapper[4728]: I0227 10:51:09.465770 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.465751059 podStartE2EDuration="3.465751059s" podCreationTimestamp="2026-02-27 10:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:51:09.456618982 +0000 UTC m=+1489.418985088" watchObservedRunningTime="2026-02-27 10:51:09.465751059 +0000 UTC m=+1489.428117165" Feb 27 10:51:09 crc kubenswrapper[4728]: I0227 10:51:09.555142 4728 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:51:10 crc kubenswrapper[4728]: I0227 10:51:10.462475 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3","Type":"ContainerStarted","Data":"4d292bde3c47282db30f40d5a29d6eeb80f93d395713fbad8699b84fc29e886f"} Feb 27 10:51:10 crc kubenswrapper[4728]: I0227 10:51:10.462830 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3","Type":"ContainerStarted","Data":"db4380089af3ee2498b11fab8702f9991e2373bc93170790920e39f5b26200a9"} Feb 27 10:51:10 crc kubenswrapper[4728]: I0227 10:51:10.462842 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3","Type":"ContainerStarted","Data":"d1b00620f06540d8dd28d1e4dc64762f4a931f9ff4b6c67bd0ef7e13058e9e73"} Feb 27 10:51:10 crc kubenswrapper[4728]: I0227 10:51:10.464949 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"136f2587-addd-432e-a591-fc74213bf87c","Type":"ContainerStarted","Data":"7e8d307f189e01e4d67ecaacdb8d9c40063a7dad4c7d69add94feaf0884e4609"} Feb 27 10:51:10 crc kubenswrapper[4728]: I0227 10:51:10.506644 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.506624583 podStartE2EDuration="2.506624583s" podCreationTimestamp="2026-02-27 10:51:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:51:10.496126959 +0000 UTC m=+1490.458493065" watchObservedRunningTime="2026-02-27 10:51:10.506624583 +0000 UTC m=+1490.468990689" Feb 27 10:51:10 crc kubenswrapper[4728]: I0227 10:51:10.751352 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11767f8f-ebed-4306-b5f7-5e79182d0ad1" 
path="/var/lib/kubelet/pods/11767f8f-ebed-4306-b5f7-5e79182d0ad1/volumes" Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.096616 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mmv5g" Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.282757 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-combined-ca-bundle\") pod \"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\" (UID: \"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\") " Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.283104 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-config-data\") pod \"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\" (UID: \"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\") " Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.283123 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-scripts\") pod \"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\" (UID: \"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\") " Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.283237 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppg9x\" (UniqueName: \"kubernetes.io/projected/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-kube-api-access-ppg9x\") pod \"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\" (UID: \"e93f65f0-9517-45e7-bcfc-3cbb70046b3e\") " Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.289090 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-scripts" (OuterVolumeSpecName: "scripts") pod "e93f65f0-9517-45e7-bcfc-3cbb70046b3e" (UID: 
"e93f65f0-9517-45e7-bcfc-3cbb70046b3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.289139 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-kube-api-access-ppg9x" (OuterVolumeSpecName: "kube-api-access-ppg9x") pod "e93f65f0-9517-45e7-bcfc-3cbb70046b3e" (UID: "e93f65f0-9517-45e7-bcfc-3cbb70046b3e"). InnerVolumeSpecName "kube-api-access-ppg9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.312357 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-config-data" (OuterVolumeSpecName: "config-data") pod "e93f65f0-9517-45e7-bcfc-3cbb70046b3e" (UID: "e93f65f0-9517-45e7-bcfc-3cbb70046b3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.321126 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e93f65f0-9517-45e7-bcfc-3cbb70046b3e" (UID: "e93f65f0-9517-45e7-bcfc-3cbb70046b3e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.385775 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppg9x\" (UniqueName: \"kubernetes.io/projected/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-kube-api-access-ppg9x\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.385811 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.385820 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.385829 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e93f65f0-9517-45e7-bcfc-3cbb70046b3e-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.479219 4728 generic.go:334] "Generic (PLEG): container finished" podID="72a5cb19-78b7-47a9-8d6f-7b5b2b67f395" containerID="e7f9e5e7af91bf89f829ae55dd9e9b4c6b616663b446670de46b1807e2c16165" exitCode=0 Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.479306 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-85k7n" event={"ID":"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395","Type":"ContainerDied","Data":"e7f9e5e7af91bf89f829ae55dd9e9b4c6b616663b446670de46b1807e2c16165"} Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.482287 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mmv5g" Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.482298 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mmv5g" event={"ID":"e93f65f0-9517-45e7-bcfc-3cbb70046b3e","Type":"ContainerDied","Data":"0e55ccad825bf81602a0b59692e1c4a7f29671d6a485ccb075d3cf6245b50833"} Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.482341 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e55ccad825bf81602a0b59692e1c4a7f29671d6a485ccb075d3cf6245b50833" Feb 27 10:51:11 crc kubenswrapper[4728]: I0227 10:51:11.484931 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"136f2587-addd-432e-a591-fc74213bf87c","Type":"ContainerStarted","Data":"85a5dac94aed8e183f346891f01f0a3e27e460c6b1ff4d65ebc507cd27c6688a"} Feb 27 10:51:12 crc kubenswrapper[4728]: I0227 10:51:12.496911 4728 generic.go:334] "Generic (PLEG): container finished" podID="96dc96a4-9c81-4702-8678-1f6824535e01" containerID="2e02056dd29e84312f0a4af87f7933d56f9035ceff201854e18c2833432799dd" exitCode=137 Feb 27 10:51:12 crc kubenswrapper[4728]: I0227 10:51:12.496990 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5dcd5b5d76-tfs97" event={"ID":"96dc96a4-9c81-4702-8678-1f6824535e01","Type":"ContainerDied","Data":"2e02056dd29e84312f0a4af87f7933d56f9035ceff201854e18c2833432799dd"} Feb 27 10:51:12 crc kubenswrapper[4728]: I0227 10:51:12.497526 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5dcd5b5d76-tfs97" event={"ID":"96dc96a4-9c81-4702-8678-1f6824535e01","Type":"ContainerDied","Data":"22e194aa56a0b907c0d4c6f80b91a1309b2605d95e93987a18c9c46d41526e85"} Feb 27 10:51:12 crc kubenswrapper[4728]: I0227 10:51:12.497544 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22e194aa56a0b907c0d4c6f80b91a1309b2605d95e93987a18c9c46d41526e85" Feb 
27 10:51:12 crc kubenswrapper[4728]: I0227 10:51:12.515461 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5dcd5b5d76-tfs97" Feb 27 10:51:12 crc kubenswrapper[4728]: I0227 10:51:12.614426 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6hh8\" (UniqueName: \"kubernetes.io/projected/96dc96a4-9c81-4702-8678-1f6824535e01-kube-api-access-s6hh8\") pod \"96dc96a4-9c81-4702-8678-1f6824535e01\" (UID: \"96dc96a4-9c81-4702-8678-1f6824535e01\") " Feb 27 10:51:12 crc kubenswrapper[4728]: I0227 10:51:12.614562 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96dc96a4-9c81-4702-8678-1f6824535e01-combined-ca-bundle\") pod \"96dc96a4-9c81-4702-8678-1f6824535e01\" (UID: \"96dc96a4-9c81-4702-8678-1f6824535e01\") " Feb 27 10:51:12 crc kubenswrapper[4728]: I0227 10:51:12.614604 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96dc96a4-9c81-4702-8678-1f6824535e01-config-data\") pod \"96dc96a4-9c81-4702-8678-1f6824535e01\" (UID: \"96dc96a4-9c81-4702-8678-1f6824535e01\") " Feb 27 10:51:12 crc kubenswrapper[4728]: I0227 10:51:12.614709 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96dc96a4-9c81-4702-8678-1f6824535e01-config-data-custom\") pod \"96dc96a4-9c81-4702-8678-1f6824535e01\" (UID: \"96dc96a4-9c81-4702-8678-1f6824535e01\") " Feb 27 10:51:12 crc kubenswrapper[4728]: I0227 10:51:12.621694 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96dc96a4-9c81-4702-8678-1f6824535e01-kube-api-access-s6hh8" (OuterVolumeSpecName: "kube-api-access-s6hh8") pod "96dc96a4-9c81-4702-8678-1f6824535e01" (UID: "96dc96a4-9c81-4702-8678-1f6824535e01"). 
InnerVolumeSpecName "kube-api-access-s6hh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:12 crc kubenswrapper[4728]: I0227 10:51:12.625325 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96dc96a4-9c81-4702-8678-1f6824535e01-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "96dc96a4-9c81-4702-8678-1f6824535e01" (UID: "96dc96a4-9c81-4702-8678-1f6824535e01"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:12 crc kubenswrapper[4728]: I0227 10:51:12.667567 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96dc96a4-9c81-4702-8678-1f6824535e01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96dc96a4-9c81-4702-8678-1f6824535e01" (UID: "96dc96a4-9c81-4702-8678-1f6824535e01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:12 crc kubenswrapper[4728]: I0227 10:51:12.693039 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96dc96a4-9c81-4702-8678-1f6824535e01-config-data" (OuterVolumeSpecName: "config-data") pod "96dc96a4-9c81-4702-8678-1f6824535e01" (UID: "96dc96a4-9c81-4702-8678-1f6824535e01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:12 crc kubenswrapper[4728]: I0227 10:51:12.717760 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6hh8\" (UniqueName: \"kubernetes.io/projected/96dc96a4-9c81-4702-8678-1f6824535e01-kube-api-access-s6hh8\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:12 crc kubenswrapper[4728]: I0227 10:51:12.717794 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96dc96a4-9c81-4702-8678-1f6824535e01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:12 crc kubenswrapper[4728]: I0227 10:51:12.717813 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96dc96a4-9c81-4702-8678-1f6824535e01-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:12 crc kubenswrapper[4728]: I0227 10:51:12.717824 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96dc96a4-9c81-4702-8678-1f6824535e01-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.004068 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-85k7n" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.125586 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6gvs\" (UniqueName: \"kubernetes.io/projected/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-kube-api-access-g6gvs\") pod \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\" (UID: \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\") " Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.125694 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-combined-ca-bundle\") pod \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\" (UID: \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\") " Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.125716 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-config-data\") pod \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\" (UID: \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\") " Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.126709 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-scripts\") pod \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\" (UID: \"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395\") " Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.130004 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-kube-api-access-g6gvs" (OuterVolumeSpecName: "kube-api-access-g6gvs") pod "72a5cb19-78b7-47a9-8d6f-7b5b2b67f395" (UID: "72a5cb19-78b7-47a9-8d6f-7b5b2b67f395"). InnerVolumeSpecName "kube-api-access-g6gvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.130444 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-scripts" (OuterVolumeSpecName: "scripts") pod "72a5cb19-78b7-47a9-8d6f-7b5b2b67f395" (UID: "72a5cb19-78b7-47a9-8d6f-7b5b2b67f395"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.156302 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-config-data" (OuterVolumeSpecName: "config-data") pod "72a5cb19-78b7-47a9-8d6f-7b5b2b67f395" (UID: "72a5cb19-78b7-47a9-8d6f-7b5b2b67f395"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.178990 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72a5cb19-78b7-47a9-8d6f-7b5b2b67f395" (UID: "72a5cb19-78b7-47a9-8d6f-7b5b2b67f395"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.230123 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.230158 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6gvs\" (UniqueName: \"kubernetes.io/projected/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-kube-api-access-g6gvs\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.230170 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.230179 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.512077 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"136f2587-addd-432e-a591-fc74213bf87c","Type":"ContainerStarted","Data":"758ac345a0bdbef7ea8def9b887d6f5ab6b80eda76934b927f7079883781c5b4"} Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.512202 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="136f2587-addd-432e-a591-fc74213bf87c" containerName="ceilometer-central-agent" containerID="cri-o://f1299dc3339493607dd2e559ab64e97ec7958bfd8a4d78aad865c5729007b21f" gracePeriod=30 Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.512293 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="136f2587-addd-432e-a591-fc74213bf87c" 
containerName="ceilometer-notification-agent" containerID="cri-o://7e8d307f189e01e4d67ecaacdb8d9c40063a7dad4c7d69add94feaf0884e4609" gracePeriod=30 Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.512293 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="136f2587-addd-432e-a591-fc74213bf87c" containerName="proxy-httpd" containerID="cri-o://758ac345a0bdbef7ea8def9b887d6f5ab6b80eda76934b927f7079883781c5b4" gracePeriod=30 Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.512323 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="136f2587-addd-432e-a591-fc74213bf87c" containerName="sg-core" containerID="cri-o://85a5dac94aed8e183f346891f01f0a3e27e460c6b1ff4d65ebc507cd27c6688a" gracePeriod=30 Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.513292 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.516256 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5dcd5b5d76-tfs97" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.516432 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-85k7n" event={"ID":"72a5cb19-78b7-47a9-8d6f-7b5b2b67f395","Type":"ContainerDied","Data":"f89cf4e70cfa4f706dbcaa1f7834610172d3a53e002a226f898517d0b3eaa6ff"} Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.516480 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f89cf4e70cfa4f706dbcaa1f7834610172d3a53e002a226f898517d0b3eaa6ff" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.516729 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-85k7n" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.554627 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.263572812 podStartE2EDuration="7.554603071s" podCreationTimestamp="2026-02-27 10:51:06 +0000 UTC" firstStartedPulling="2026-02-27 10:51:08.326607137 +0000 UTC m=+1488.288973243" lastFinishedPulling="2026-02-27 10:51:12.617637386 +0000 UTC m=+1492.580003502" observedRunningTime="2026-02-27 10:51:13.537201539 +0000 UTC m=+1493.499567645" watchObservedRunningTime="2026-02-27 10:51:13.554603071 +0000 UTC m=+1493.516969177" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.646426 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5dcd5b5d76-tfs97"] Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.656926 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5dcd5b5d76-tfs97"] Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.672732 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 10:51:13 crc kubenswrapper[4728]: E0227 10:51:13.673263 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a5cb19-78b7-47a9-8d6f-7b5b2b67f395" containerName="nova-cell1-conductor-db-sync" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.673278 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a5cb19-78b7-47a9-8d6f-7b5b2b67f395" containerName="nova-cell1-conductor-db-sync" Feb 27 10:51:13 crc kubenswrapper[4728]: E0227 10:51:13.673322 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96dc96a4-9c81-4702-8678-1f6824535e01" containerName="heat-api" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.673329 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="96dc96a4-9c81-4702-8678-1f6824535e01" containerName="heat-api" Feb 27 10:51:13 crc 
kubenswrapper[4728]: E0227 10:51:13.673341 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e93f65f0-9517-45e7-bcfc-3cbb70046b3e" containerName="nova-manage" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.673349 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e93f65f0-9517-45e7-bcfc-3cbb70046b3e" containerName="nova-manage" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.673569 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e93f65f0-9517-45e7-bcfc-3cbb70046b3e" containerName="nova-manage" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.673605 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="96dc96a4-9c81-4702-8678-1f6824535e01" containerName="heat-api" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.673625 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a5cb19-78b7-47a9-8d6f-7b5b2b67f395" containerName="nova-cell1-conductor-db-sync" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.674438 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.676881 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.691267 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.748127 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z6n9\" (UniqueName: \"kubernetes.io/projected/045a229f-359b-4278-b366-233bc1921370-kube-api-access-2z6n9\") pod \"nova-cell1-conductor-0\" (UID: \"045a229f-359b-4278-b366-233bc1921370\") " pod="openstack/nova-cell1-conductor-0" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.748247 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/045a229f-359b-4278-b366-233bc1921370-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"045a229f-359b-4278-b366-233bc1921370\") " pod="openstack/nova-cell1-conductor-0" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.748307 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/045a229f-359b-4278-b366-233bc1921370-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"045a229f-359b-4278-b366-233bc1921370\") " pod="openstack/nova-cell1-conductor-0" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.850880 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/045a229f-359b-4278-b366-233bc1921370-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"045a229f-359b-4278-b366-233bc1921370\") " pod="openstack/nova-cell1-conductor-0" Feb 27 10:51:13 crc 
kubenswrapper[4728]: I0227 10:51:13.851119 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z6n9\" (UniqueName: \"kubernetes.io/projected/045a229f-359b-4278-b366-233bc1921370-kube-api-access-2z6n9\") pod \"nova-cell1-conductor-0\" (UID: \"045a229f-359b-4278-b366-233bc1921370\") " pod="openstack/nova-cell1-conductor-0" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.851191 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/045a229f-359b-4278-b366-233bc1921370-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"045a229f-359b-4278-b366-233bc1921370\") " pod="openstack/nova-cell1-conductor-0" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.868516 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/045a229f-359b-4278-b366-233bc1921370-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"045a229f-359b-4278-b366-233bc1921370\") " pod="openstack/nova-cell1-conductor-0" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.873134 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/045a229f-359b-4278-b366-233bc1921370-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"045a229f-359b-4278-b366-233bc1921370\") " pod="openstack/nova-cell1-conductor-0" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.910017 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z6n9\" (UniqueName: \"kubernetes.io/projected/045a229f-359b-4278-b366-233bc1921370-kube-api-access-2z6n9\") pod \"nova-cell1-conductor-0\" (UID: \"045a229f-359b-4278-b366-233bc1921370\") " pod="openstack/nova-cell1-conductor-0" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.961691 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-metadata-0" Feb 27 10:51:13 crc kubenswrapper[4728]: I0227 10:51:13.962366 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 10:51:14 crc kubenswrapper[4728]: I0227 10:51:14.104443 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 27 10:51:14 crc kubenswrapper[4728]: I0227 10:51:14.530781 4728 generic.go:334] "Generic (PLEG): container finished" podID="136f2587-addd-432e-a591-fc74213bf87c" containerID="758ac345a0bdbef7ea8def9b887d6f5ab6b80eda76934b927f7079883781c5b4" exitCode=0 Feb 27 10:51:14 crc kubenswrapper[4728]: I0227 10:51:14.532433 4728 generic.go:334] "Generic (PLEG): container finished" podID="136f2587-addd-432e-a591-fc74213bf87c" containerID="85a5dac94aed8e183f346891f01f0a3e27e460c6b1ff4d65ebc507cd27c6688a" exitCode=2 Feb 27 10:51:14 crc kubenswrapper[4728]: I0227 10:51:14.532728 4728 generic.go:334] "Generic (PLEG): container finished" podID="136f2587-addd-432e-a591-fc74213bf87c" containerID="7e8d307f189e01e4d67ecaacdb8d9c40063a7dad4c7d69add94feaf0884e4609" exitCode=0 Feb 27 10:51:14 crc kubenswrapper[4728]: I0227 10:51:14.530854 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"136f2587-addd-432e-a591-fc74213bf87c","Type":"ContainerDied","Data":"758ac345a0bdbef7ea8def9b887d6f5ab6b80eda76934b927f7079883781c5b4"} Feb 27 10:51:14 crc kubenswrapper[4728]: I0227 10:51:14.532923 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"136f2587-addd-432e-a591-fc74213bf87c","Type":"ContainerDied","Data":"85a5dac94aed8e183f346891f01f0a3e27e460c6b1ff4d65ebc507cd27c6688a"} Feb 27 10:51:14 crc kubenswrapper[4728]: I0227 10:51:14.532948 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"136f2587-addd-432e-a591-fc74213bf87c","Type":"ContainerDied","Data":"7e8d307f189e01e4d67ecaacdb8d9c40063a7dad4c7d69add94feaf0884e4609"} Feb 27 10:51:14 crc kubenswrapper[4728]: I0227 10:51:14.645690 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 10:51:14 crc kubenswrapper[4728]: I0227 10:51:14.745177 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96dc96a4-9c81-4702-8678-1f6824535e01" path="/var/lib/kubelet/pods/96dc96a4-9c81-4702-8678-1f6824535e01/volumes" Feb 27 10:51:15 crc kubenswrapper[4728]: I0227 10:51:15.547774 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"045a229f-359b-4278-b366-233bc1921370","Type":"ContainerStarted","Data":"958a81eaa7ae39825c8e4fa644b4b75e888fd45c3976c8134a1bb02cc65af877"} Feb 27 10:51:15 crc kubenswrapper[4728]: I0227 10:51:15.548316 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 27 10:51:15 crc kubenswrapper[4728]: I0227 10:51:15.548342 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"045a229f-359b-4278-b366-233bc1921370","Type":"ContainerStarted","Data":"11b395a54608ebee6bc7a0fab776336e5750449adb3de2c7e043f8b6963ed56a"} Feb 27 10:51:15 crc kubenswrapper[4728]: I0227 10:51:15.581142 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.581120151 podStartE2EDuration="2.581120151s" podCreationTimestamp="2026-02-27 10:51:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:51:15.569780224 +0000 UTC m=+1495.532146340" watchObservedRunningTime="2026-02-27 10:51:15.581120151 +0000 UTC m=+1495.543486257" Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.124738 4728 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-x426g"] Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.126856 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-x426g" Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.137490 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-x426g"] Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.209936 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pz2h\" (UniqueName: \"kubernetes.io/projected/41862d42-5899-4daf-8f29-a24ba28d3908-kube-api-access-6pz2h\") pod \"aodh-db-create-x426g\" (UID: \"41862d42-5899-4daf-8f29-a24ba28d3908\") " pod="openstack/aodh-db-create-x426g" Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.210008 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41862d42-5899-4daf-8f29-a24ba28d3908-operator-scripts\") pod \"aodh-db-create-x426g\" (UID: \"41862d42-5899-4daf-8f29-a24ba28d3908\") " pod="openstack/aodh-db-create-x426g" Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.231148 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-88ac-account-create-update-q6xk7"] Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.232835 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-88ac-account-create-update-q6xk7" Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.234981 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.248088 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-88ac-account-create-update-q6xk7"] Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.312133 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pz2h\" (UniqueName: \"kubernetes.io/projected/41862d42-5899-4daf-8f29-a24ba28d3908-kube-api-access-6pz2h\") pod \"aodh-db-create-x426g\" (UID: \"41862d42-5899-4daf-8f29-a24ba28d3908\") " pod="openstack/aodh-db-create-x426g" Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.312240 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41862d42-5899-4daf-8f29-a24ba28d3908-operator-scripts\") pod \"aodh-db-create-x426g\" (UID: \"41862d42-5899-4daf-8f29-a24ba28d3908\") " pod="openstack/aodh-db-create-x426g" Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.312293 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/204df3b2-d221-4ba6-811d-d232b4a5d12e-operator-scripts\") pod \"aodh-88ac-account-create-update-q6xk7\" (UID: \"204df3b2-d221-4ba6-811d-d232b4a5d12e\") " pod="openstack/aodh-88ac-account-create-update-q6xk7" Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.312447 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5bnz\" (UniqueName: \"kubernetes.io/projected/204df3b2-d221-4ba6-811d-d232b4a5d12e-kube-api-access-g5bnz\") pod \"aodh-88ac-account-create-update-q6xk7\" (UID: \"204df3b2-d221-4ba6-811d-d232b4a5d12e\") " 
pod="openstack/aodh-88ac-account-create-update-q6xk7" Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.313743 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41862d42-5899-4daf-8f29-a24ba28d3908-operator-scripts\") pod \"aodh-db-create-x426g\" (UID: \"41862d42-5899-4daf-8f29-a24ba28d3908\") " pod="openstack/aodh-db-create-x426g" Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.330716 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pz2h\" (UniqueName: \"kubernetes.io/projected/41862d42-5899-4daf-8f29-a24ba28d3908-kube-api-access-6pz2h\") pod \"aodh-db-create-x426g\" (UID: \"41862d42-5899-4daf-8f29-a24ba28d3908\") " pod="openstack/aodh-db-create-x426g" Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.414200 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5bnz\" (UniqueName: \"kubernetes.io/projected/204df3b2-d221-4ba6-811d-d232b4a5d12e-kube-api-access-g5bnz\") pod \"aodh-88ac-account-create-update-q6xk7\" (UID: \"204df3b2-d221-4ba6-811d-d232b4a5d12e\") " pod="openstack/aodh-88ac-account-create-update-q6xk7" Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.414380 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/204df3b2-d221-4ba6-811d-d232b4a5d12e-operator-scripts\") pod \"aodh-88ac-account-create-update-q6xk7\" (UID: \"204df3b2-d221-4ba6-811d-d232b4a5d12e\") " pod="openstack/aodh-88ac-account-create-update-q6xk7" Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.415040 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/204df3b2-d221-4ba6-811d-d232b4a5d12e-operator-scripts\") pod \"aodh-88ac-account-create-update-q6xk7\" (UID: \"204df3b2-d221-4ba6-811d-d232b4a5d12e\") " 
pod="openstack/aodh-88ac-account-create-update-q6xk7" Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.434019 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5bnz\" (UniqueName: \"kubernetes.io/projected/204df3b2-d221-4ba6-811d-d232b4a5d12e-kube-api-access-g5bnz\") pod \"aodh-88ac-account-create-update-q6xk7\" (UID: \"204df3b2-d221-4ba6-811d-d232b4a5d12e\") " pod="openstack/aodh-88ac-account-create-update-q6xk7" Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.453976 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-x426g" Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.548490 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-88ac-account-create-update-q6xk7" Feb 27 10:51:16 crc kubenswrapper[4728]: I0227 10:51:16.788722 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 27 10:51:17 crc kubenswrapper[4728]: I0227 10:51:17.035841 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-x426g"] Feb 27 10:51:17 crc kubenswrapper[4728]: I0227 10:51:17.156472 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-88ac-account-create-update-q6xk7"] Feb 27 10:51:17 crc kubenswrapper[4728]: W0227 10:51:17.165967 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204df3b2_d221_4ba6_811d_d232b4a5d12e.slice/crio-e47f415e9bd3d45561f78c9549bdc9c0efebd37ca413ea5fdad0539a32607574 WatchSource:0}: Error finding container e47f415e9bd3d45561f78c9549bdc9c0efebd37ca413ea5fdad0539a32607574: Status 404 returned error can't find the container with id e47f415e9bd3d45561f78c9549bdc9c0efebd37ca413ea5fdad0539a32607574 Feb 27 10:51:17 crc kubenswrapper[4728]: I0227 10:51:17.443094 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-0"] Feb 27 10:51:17 crc kubenswrapper[4728]: I0227 10:51:17.443323 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bd23f1b2-4fb6-4e9d-a692-7d9640e4c999" containerName="nova-api-log" containerID="cri-o://cb737e98e80cdec80a0101a1a34cdf6778e737ce9fb5e469cfb2b3181957daa0" gracePeriod=30 Feb 27 10:51:17 crc kubenswrapper[4728]: I0227 10:51:17.443779 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bd23f1b2-4fb6-4e9d-a692-7d9640e4c999" containerName="nova-api-api" containerID="cri-o://9d7f129f8f89000ace0bb28f9448a4f12cb09c4c0460f2252dadee25f40ec577" gracePeriod=30 Feb 27 10:51:17 crc kubenswrapper[4728]: I0227 10:51:17.480427 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:51:17 crc kubenswrapper[4728]: I0227 10:51:17.480787 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3" containerName="nova-metadata-log" containerID="cri-o://db4380089af3ee2498b11fab8702f9991e2373bc93170790920e39f5b26200a9" gracePeriod=30 Feb 27 10:51:17 crc kubenswrapper[4728]: I0227 10:51:17.481483 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3" containerName="nova-metadata-metadata" containerID="cri-o://4d292bde3c47282db30f40d5a29d6eeb80f93d395713fbad8699b84fc29e886f" gracePeriod=30 Feb 27 10:51:17 crc kubenswrapper[4728]: I0227 10:51:17.578863 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-88ac-account-create-update-q6xk7" event={"ID":"204df3b2-d221-4ba6-811d-d232b4a5d12e","Type":"ContainerStarted","Data":"88d1b4dcf84f7296d467a124e0be85197cfc9732fb0ec8ec914114972c0f328b"} Feb 27 10:51:17 crc kubenswrapper[4728]: I0227 10:51:17.578912 4728 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/aodh-88ac-account-create-update-q6xk7" event={"ID":"204df3b2-d221-4ba6-811d-d232b4a5d12e","Type":"ContainerStarted","Data":"e47f415e9bd3d45561f78c9549bdc9c0efebd37ca413ea5fdad0539a32607574"} Feb 27 10:51:17 crc kubenswrapper[4728]: I0227 10:51:17.580855 4728 generic.go:334] "Generic (PLEG): container finished" podID="bd23f1b2-4fb6-4e9d-a692-7d9640e4c999" containerID="cb737e98e80cdec80a0101a1a34cdf6778e737ce9fb5e469cfb2b3181957daa0" exitCode=143 Feb 27 10:51:17 crc kubenswrapper[4728]: I0227 10:51:17.580954 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999","Type":"ContainerDied","Data":"cb737e98e80cdec80a0101a1a34cdf6778e737ce9fb5e469cfb2b3181957daa0"} Feb 27 10:51:17 crc kubenswrapper[4728]: I0227 10:51:17.582014 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-x426g" event={"ID":"41862d42-5899-4daf-8f29-a24ba28d3908","Type":"ContainerStarted","Data":"0f26c5efd0b2a4d72fccf77b12fbee4f8a5ef9a3950d91e4dc4a7c29890194f5"} Feb 27 10:51:17 crc kubenswrapper[4728]: I0227 10:51:17.582043 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-x426g" event={"ID":"41862d42-5899-4daf-8f29-a24ba28d3908","Type":"ContainerStarted","Data":"6b4a047e997109b933ea32523114e10147ab1430311cf8bf47242eb038e75bab"} Feb 27 10:51:17 crc kubenswrapper[4728]: I0227 10:51:17.624737 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-88ac-account-create-update-q6xk7" podStartSLOduration=1.624710083 podStartE2EDuration="1.624710083s" podCreationTimestamp="2026-02-27 10:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:51:17.597217598 +0000 UTC m=+1497.559583704" watchObservedRunningTime="2026-02-27 10:51:17.624710083 +0000 UTC m=+1497.587076189" Feb 27 10:51:17 crc 
kubenswrapper[4728]: I0227 10:51:17.633892 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-x426g" podStartSLOduration=1.633871053 podStartE2EDuration="1.633871053s" podCreationTimestamp="2026-02-27 10:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:51:17.619817561 +0000 UTC m=+1497.582183677" watchObservedRunningTime="2026-02-27 10:51:17.633871053 +0000 UTC m=+1497.596237159" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.170327 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.264799 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stllz\" (UniqueName: \"kubernetes.io/projected/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-kube-api-access-stllz\") pod \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\" (UID: \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\") " Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.265413 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-config-data\") pod \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\" (UID: \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\") " Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.265572 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-logs\") pod \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\" (UID: \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\") " Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.265592 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-combined-ca-bundle\") pod \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\" (UID: \"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999\") " Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.267773 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-logs" (OuterVolumeSpecName: "logs") pod "bd23f1b2-4fb6-4e9d-a692-7d9640e4c999" (UID: "bd23f1b2-4fb6-4e9d-a692-7d9640e4c999"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.271489 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-kube-api-access-stllz" (OuterVolumeSpecName: "kube-api-access-stllz") pod "bd23f1b2-4fb6-4e9d-a692-7d9640e4c999" (UID: "bd23f1b2-4fb6-4e9d-a692-7d9640e4c999"). InnerVolumeSpecName "kube-api-access-stllz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.316918 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd23f1b2-4fb6-4e9d-a692-7d9640e4c999" (UID: "bd23f1b2-4fb6-4e9d-a692-7d9640e4c999"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.318511 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-config-data" (OuterVolumeSpecName: "config-data") pod "bd23f1b2-4fb6-4e9d-a692-7d9640e4c999" (UID: "bd23f1b2-4fb6-4e9d-a692-7d9640e4c999"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.368602 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.368828 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.368887 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.368950 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stllz\" (UniqueName: \"kubernetes.io/projected/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999-kube-api-access-stllz\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.588859 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.596291 4728 generic.go:334] "Generic (PLEG): container finished" podID="204df3b2-d221-4ba6-811d-d232b4a5d12e" containerID="88d1b4dcf84f7296d467a124e0be85197cfc9732fb0ec8ec914114972c0f328b" exitCode=0 Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.596377 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-88ac-account-create-update-q6xk7" event={"ID":"204df3b2-d221-4ba6-811d-d232b4a5d12e","Type":"ContainerDied","Data":"88d1b4dcf84f7296d467a124e0be85197cfc9732fb0ec8ec914114972c0f328b"} Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.598243 4728 generic.go:334] "Generic (PLEG): container finished" podID="bd23f1b2-4fb6-4e9d-a692-7d9640e4c999" containerID="9d7f129f8f89000ace0bb28f9448a4f12cb09c4c0460f2252dadee25f40ec577" exitCode=0 Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.598321 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.598418 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999","Type":"ContainerDied","Data":"9d7f129f8f89000ace0bb28f9448a4f12cb09c4c0460f2252dadee25f40ec577"} Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.598558 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd23f1b2-4fb6-4e9d-a692-7d9640e4c999","Type":"ContainerDied","Data":"0314621f89d8617223504b65cf6391c7ef428c82ebb744fb1f8f06db0c4e0af9"} Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.598642 4728 scope.go:117] "RemoveContainer" containerID="9d7f129f8f89000ace0bb28f9448a4f12cb09c4c0460f2252dadee25f40ec577" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.600851 4728 generic.go:334] "Generic (PLEG): container finished" podID="3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3" containerID="4d292bde3c47282db30f40d5a29d6eeb80f93d395713fbad8699b84fc29e886f" exitCode=0 Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.600876 4728 generic.go:334] "Generic (PLEG): container finished" podID="3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3" containerID="db4380089af3ee2498b11fab8702f9991e2373bc93170790920e39f5b26200a9" exitCode=143 Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.600935 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3","Type":"ContainerDied","Data":"4d292bde3c47282db30f40d5a29d6eeb80f93d395713fbad8699b84fc29e886f"} Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.600952 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3","Type":"ContainerDied","Data":"db4380089af3ee2498b11fab8702f9991e2373bc93170790920e39f5b26200a9"} Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.600962 4728 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3","Type":"ContainerDied","Data":"d1b00620f06540d8dd28d1e4dc64762f4a931f9ff4b6c67bd0ef7e13058e9e73"} Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.600967 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.606944 4728 generic.go:334] "Generic (PLEG): container finished" podID="41862d42-5899-4daf-8f29-a24ba28d3908" containerID="0f26c5efd0b2a4d72fccf77b12fbee4f8a5ef9a3950d91e4dc4a7c29890194f5" exitCode=0 Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.607096 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-x426g" event={"ID":"41862d42-5899-4daf-8f29-a24ba28d3908","Type":"ContainerDied","Data":"0f26c5efd0b2a4d72fccf77b12fbee4f8a5ef9a3950d91e4dc4a7c29890194f5"} Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.649759 4728 scope.go:117] "RemoveContainer" containerID="cb737e98e80cdec80a0101a1a34cdf6778e737ce9fb5e469cfb2b3181957daa0" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.674606 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4q28\" (UniqueName: \"kubernetes.io/projected/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-kube-api-access-n4q28\") pod \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\" (UID: \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.674751 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-logs\") pod \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\" (UID: \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.674802 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-config-data\") pod \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\" (UID: \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.674878 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-combined-ca-bundle\") pod \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\" (UID: \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.675016 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-nova-metadata-tls-certs\") pod \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\" (UID: \"3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3\") " Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.675889 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-logs" (OuterVolumeSpecName: "logs") pod "3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3" (UID: "3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.685396 4728 scope.go:117] "RemoveContainer" containerID="9d7f129f8f89000ace0bb28f9448a4f12cb09c4c0460f2252dadee25f40ec577" Feb 27 10:51:18 crc kubenswrapper[4728]: E0227 10:51:18.688853 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d7f129f8f89000ace0bb28f9448a4f12cb09c4c0460f2252dadee25f40ec577\": container with ID starting with 9d7f129f8f89000ace0bb28f9448a4f12cb09c4c0460f2252dadee25f40ec577 not found: ID does not exist" containerID="9d7f129f8f89000ace0bb28f9448a4f12cb09c4c0460f2252dadee25f40ec577" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.688910 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d7f129f8f89000ace0bb28f9448a4f12cb09c4c0460f2252dadee25f40ec577"} err="failed to get container status \"9d7f129f8f89000ace0bb28f9448a4f12cb09c4c0460f2252dadee25f40ec577\": rpc error: code = NotFound desc = could not find container \"9d7f129f8f89000ace0bb28f9448a4f12cb09c4c0460f2252dadee25f40ec577\": container with ID starting with 9d7f129f8f89000ace0bb28f9448a4f12cb09c4c0460f2252dadee25f40ec577 not found: ID does not exist" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.688941 4728 scope.go:117] "RemoveContainer" containerID="cb737e98e80cdec80a0101a1a34cdf6778e737ce9fb5e469cfb2b3181957daa0" Feb 27 10:51:18 crc kubenswrapper[4728]: E0227 10:51:18.689595 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb737e98e80cdec80a0101a1a34cdf6778e737ce9fb5e469cfb2b3181957daa0\": container with ID starting with cb737e98e80cdec80a0101a1a34cdf6778e737ce9fb5e469cfb2b3181957daa0 not found: ID does not exist" containerID="cb737e98e80cdec80a0101a1a34cdf6778e737ce9fb5e469cfb2b3181957daa0" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.696617 
4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb737e98e80cdec80a0101a1a34cdf6778e737ce9fb5e469cfb2b3181957daa0"} err="failed to get container status \"cb737e98e80cdec80a0101a1a34cdf6778e737ce9fb5e469cfb2b3181957daa0\": rpc error: code = NotFound desc = could not find container \"cb737e98e80cdec80a0101a1a34cdf6778e737ce9fb5e469cfb2b3181957daa0\": container with ID starting with cb737e98e80cdec80a0101a1a34cdf6778e737ce9fb5e469cfb2b3181957daa0 not found: ID does not exist" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.696730 4728 scope.go:117] "RemoveContainer" containerID="4d292bde3c47282db30f40d5a29d6eeb80f93d395713fbad8699b84fc29e886f" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.699891 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-kube-api-access-n4q28" (OuterVolumeSpecName: "kube-api-access-n4q28") pod "3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3" (UID: "3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3"). InnerVolumeSpecName "kube-api-access-n4q28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.764678 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3" (UID: "3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.781476 4728 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.781545 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4q28\" (UniqueName: \"kubernetes.io/projected/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-kube-api-access-n4q28\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.781565 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.799618 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-config-data" (OuterVolumeSpecName: "config-data") pod "3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3" (UID: "3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.820436 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3" (UID: "3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.822170 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.822201 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.824699 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 10:51:18 crc kubenswrapper[4728]: E0227 10:51:18.825320 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3" containerName="nova-metadata-log" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.825339 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3" containerName="nova-metadata-log" Feb 27 10:51:18 crc kubenswrapper[4728]: E0227 10:51:18.825367 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd23f1b2-4fb6-4e9d-a692-7d9640e4c999" containerName="nova-api-api" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.825374 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd23f1b2-4fb6-4e9d-a692-7d9640e4c999" containerName="nova-api-api" Feb 27 10:51:18 crc kubenswrapper[4728]: E0227 10:51:18.825390 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3" containerName="nova-metadata-metadata" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.825396 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3" containerName="nova-metadata-metadata" Feb 27 10:51:18 crc kubenswrapper[4728]: E0227 10:51:18.825425 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd23f1b2-4fb6-4e9d-a692-7d9640e4c999" containerName="nova-api-log" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.825431 4728 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="bd23f1b2-4fb6-4e9d-a692-7d9640e4c999" containerName="nova-api-log" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.825676 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd23f1b2-4fb6-4e9d-a692-7d9640e4c999" containerName="nova-api-api" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.825704 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3" containerName="nova-metadata-log" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.825721 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3" containerName="nova-metadata-metadata" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.825728 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd23f1b2-4fb6-4e9d-a692-7d9640e4c999" containerName="nova-api-log" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.829038 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.831737 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.839848 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.846275 4728 scope.go:117] "RemoveContainer" containerID="db4380089af3ee2498b11fab8702f9991e2373bc93170790920e39f5b26200a9" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.865649 4728 scope.go:117] "RemoveContainer" containerID="4d292bde3c47282db30f40d5a29d6eeb80f93d395713fbad8699b84fc29e886f" Feb 27 10:51:18 crc kubenswrapper[4728]: E0227 10:51:18.866521 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d292bde3c47282db30f40d5a29d6eeb80f93d395713fbad8699b84fc29e886f\": container with ID starting with 4d292bde3c47282db30f40d5a29d6eeb80f93d395713fbad8699b84fc29e886f not found: ID does not exist" containerID="4d292bde3c47282db30f40d5a29d6eeb80f93d395713fbad8699b84fc29e886f" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.866688 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d292bde3c47282db30f40d5a29d6eeb80f93d395713fbad8699b84fc29e886f"} err="failed to get container status \"4d292bde3c47282db30f40d5a29d6eeb80f93d395713fbad8699b84fc29e886f\": rpc error: code = NotFound desc = could not find container \"4d292bde3c47282db30f40d5a29d6eeb80f93d395713fbad8699b84fc29e886f\": container with ID starting with 4d292bde3c47282db30f40d5a29d6eeb80f93d395713fbad8699b84fc29e886f not found: ID does not exist" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.866803 4728 scope.go:117] "RemoveContainer" containerID="db4380089af3ee2498b11fab8702f9991e2373bc93170790920e39f5b26200a9" Feb 27 10:51:18 crc kubenswrapper[4728]: 
E0227 10:51:18.867335 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db4380089af3ee2498b11fab8702f9991e2373bc93170790920e39f5b26200a9\": container with ID starting with db4380089af3ee2498b11fab8702f9991e2373bc93170790920e39f5b26200a9 not found: ID does not exist" containerID="db4380089af3ee2498b11fab8702f9991e2373bc93170790920e39f5b26200a9" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.867388 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db4380089af3ee2498b11fab8702f9991e2373bc93170790920e39f5b26200a9"} err="failed to get container status \"db4380089af3ee2498b11fab8702f9991e2373bc93170790920e39f5b26200a9\": rpc error: code = NotFound desc = could not find container \"db4380089af3ee2498b11fab8702f9991e2373bc93170790920e39f5b26200a9\": container with ID starting with db4380089af3ee2498b11fab8702f9991e2373bc93170790920e39f5b26200a9 not found: ID does not exist" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.867409 4728 scope.go:117] "RemoveContainer" containerID="4d292bde3c47282db30f40d5a29d6eeb80f93d395713fbad8699b84fc29e886f" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.867806 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d292bde3c47282db30f40d5a29d6eeb80f93d395713fbad8699b84fc29e886f"} err="failed to get container status \"4d292bde3c47282db30f40d5a29d6eeb80f93d395713fbad8699b84fc29e886f\": rpc error: code = NotFound desc = could not find container \"4d292bde3c47282db30f40d5a29d6eeb80f93d395713fbad8699b84fc29e886f\": container with ID starting with 4d292bde3c47282db30f40d5a29d6eeb80f93d395713fbad8699b84fc29e886f not found: ID does not exist" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.867862 4728 scope.go:117] "RemoveContainer" containerID="db4380089af3ee2498b11fab8702f9991e2373bc93170790920e39f5b26200a9" Feb 27 10:51:18 crc 
kubenswrapper[4728]: I0227 10:51:18.868278 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db4380089af3ee2498b11fab8702f9991e2373bc93170790920e39f5b26200a9"} err="failed to get container status \"db4380089af3ee2498b11fab8702f9991e2373bc93170790920e39f5b26200a9\": rpc error: code = NotFound desc = could not find container \"db4380089af3ee2498b11fab8702f9991e2373bc93170790920e39f5b26200a9\": container with ID starting with db4380089af3ee2498b11fab8702f9991e2373bc93170790920e39f5b26200a9 not found: ID does not exist" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.883591 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-config-data\") pod \"nova-api-0\" (UID: \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\") " pod="openstack/nova-api-0" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.883831 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4x2z\" (UniqueName: \"kubernetes.io/projected/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-kube-api-access-v4x2z\") pod \"nova-api-0\" (UID: \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\") " pod="openstack/nova-api-0" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.883948 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-logs\") pod \"nova-api-0\" (UID: \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\") " pod="openstack/nova-api-0" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.884211 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\") " pod="openstack/nova-api-0" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.884399 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.884543 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.936673 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.955186 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.974255 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.976410 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.985733 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.986676 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.986848 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.987726 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-config-data\") pod \"nova-api-0\" (UID: \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\") " pod="openstack/nova-api-0" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.987772 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4x2z\" (UniqueName: \"kubernetes.io/projected/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-kube-api-access-v4x2z\") pod \"nova-api-0\" (UID: \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\") " pod="openstack/nova-api-0" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.987854 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-logs\") pod \"nova-api-0\" (UID: \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\") " pod="openstack/nova-api-0" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.987890 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\") " pod="openstack/nova-api-0" Feb 27 10:51:18 crc kubenswrapper[4728]: 
I0227 10:51:18.990632 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-logs\") pod \"nova-api-0\" (UID: \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\") " pod="openstack/nova-api-0" Feb 27 10:51:18 crc kubenswrapper[4728]: I0227 10:51:18.995408 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-config-data\") pod \"nova-api-0\" (UID: \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\") " pod="openstack/nova-api-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.006652 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\") " pod="openstack/nova-api-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.009845 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4x2z\" (UniqueName: \"kubernetes.io/projected/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-kube-api-access-v4x2z\") pod \"nova-api-0\" (UID: \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\") " pod="openstack/nova-api-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.056561 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h4gl8"] Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.059515 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h4gl8" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.077619 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h4gl8"] Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.089797 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec89ac4c-d100-4004-bb62-0f5e6a344efd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " pod="openstack/nova-metadata-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.089855 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec89ac4c-d100-4004-bb62-0f5e6a344efd-logs\") pod \"nova-metadata-0\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " pod="openstack/nova-metadata-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.089906 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec89ac4c-d100-4004-bb62-0f5e6a344efd-config-data\") pod \"nova-metadata-0\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " pod="openstack/nova-metadata-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.090003 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec89ac4c-d100-4004-bb62-0f5e6a344efd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " pod="openstack/nova-metadata-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.090029 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qfb4\" (UniqueName: 
\"kubernetes.io/projected/ec89ac4c-d100-4004-bb62-0f5e6a344efd-kube-api-access-5qfb4\") pod \"nova-metadata-0\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " pod="openstack/nova-metadata-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.138966 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.151912 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.191928 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec89ac4c-d100-4004-bb62-0f5e6a344efd-config-data\") pod \"nova-metadata-0\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " pod="openstack/nova-metadata-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.192079 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa9e238-79b0-4757-acab-53537b5ae93a-catalog-content\") pod \"redhat-operators-h4gl8\" (UID: \"efa9e238-79b0-4757-acab-53537b5ae93a\") " pod="openshift-marketplace/redhat-operators-h4gl8" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.192178 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec89ac4c-d100-4004-bb62-0f5e6a344efd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " pod="openstack/nova-metadata-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.192222 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qfb4\" (UniqueName: \"kubernetes.io/projected/ec89ac4c-d100-4004-bb62-0f5e6a344efd-kube-api-access-5qfb4\") pod \"nova-metadata-0\" (UID: 
\"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " pod="openstack/nova-metadata-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.192304 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa9e238-79b0-4757-acab-53537b5ae93a-utilities\") pod \"redhat-operators-h4gl8\" (UID: \"efa9e238-79b0-4757-acab-53537b5ae93a\") " pod="openshift-marketplace/redhat-operators-h4gl8" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.192415 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec89ac4c-d100-4004-bb62-0f5e6a344efd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " pod="openstack/nova-metadata-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.192460 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec89ac4c-d100-4004-bb62-0f5e6a344efd-logs\") pod \"nova-metadata-0\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " pod="openstack/nova-metadata-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.192483 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rljkb\" (UniqueName: \"kubernetes.io/projected/efa9e238-79b0-4757-acab-53537b5ae93a-kube-api-access-rljkb\") pod \"redhat-operators-h4gl8\" (UID: \"efa9e238-79b0-4757-acab-53537b5ae93a\") " pod="openshift-marketplace/redhat-operators-h4gl8" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.193373 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec89ac4c-d100-4004-bb62-0f5e6a344efd-logs\") pod \"nova-metadata-0\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " pod="openstack/nova-metadata-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 
10:51:19.196411 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec89ac4c-d100-4004-bb62-0f5e6a344efd-config-data\") pod \"nova-metadata-0\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " pod="openstack/nova-metadata-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.196459 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec89ac4c-d100-4004-bb62-0f5e6a344efd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " pod="openstack/nova-metadata-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.201084 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec89ac4c-d100-4004-bb62-0f5e6a344efd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " pod="openstack/nova-metadata-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.210692 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qfb4\" (UniqueName: \"kubernetes.io/projected/ec89ac4c-d100-4004-bb62-0f5e6a344efd-kube-api-access-5qfb4\") pod \"nova-metadata-0\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " pod="openstack/nova-metadata-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.294040 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa9e238-79b0-4757-acab-53537b5ae93a-utilities\") pod \"redhat-operators-h4gl8\" (UID: \"efa9e238-79b0-4757-acab-53537b5ae93a\") " pod="openshift-marketplace/redhat-operators-h4gl8" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.294379 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rljkb\" (UniqueName: 
\"kubernetes.io/projected/efa9e238-79b0-4757-acab-53537b5ae93a-kube-api-access-rljkb\") pod \"redhat-operators-h4gl8\" (UID: \"efa9e238-79b0-4757-acab-53537b5ae93a\") " pod="openshift-marketplace/redhat-operators-h4gl8" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.294484 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa9e238-79b0-4757-acab-53537b5ae93a-catalog-content\") pod \"redhat-operators-h4gl8\" (UID: \"efa9e238-79b0-4757-acab-53537b5ae93a\") " pod="openshift-marketplace/redhat-operators-h4gl8" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.294568 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa9e238-79b0-4757-acab-53537b5ae93a-utilities\") pod \"redhat-operators-h4gl8\" (UID: \"efa9e238-79b0-4757-acab-53537b5ae93a\") " pod="openshift-marketplace/redhat-operators-h4gl8" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.295019 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa9e238-79b0-4757-acab-53537b5ae93a-catalog-content\") pod \"redhat-operators-h4gl8\" (UID: \"efa9e238-79b0-4757-acab-53537b5ae93a\") " pod="openshift-marketplace/redhat-operators-h4gl8" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.306595 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.325324 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rljkb\" (UniqueName: \"kubernetes.io/projected/efa9e238-79b0-4757-acab-53537b5ae93a-kube-api-access-rljkb\") pod \"redhat-operators-h4gl8\" (UID: \"efa9e238-79b0-4757-acab-53537b5ae93a\") " pod="openshift-marketplace/redhat-operators-h4gl8" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.387990 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h4gl8" Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.631948 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:51:19 crc kubenswrapper[4728]: W0227 10:51:19.836486 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec89ac4c_d100_4004_bb62_0f5e6a344efd.slice/crio-79d25bc6f61a2112686c3cbcfea377e07c653a59c490b039537dcf2c2ad416b8 WatchSource:0}: Error finding container 79d25bc6f61a2112686c3cbcfea377e07c653a59c490b039537dcf2c2ad416b8: Status 404 returned error can't find the container with id 79d25bc6f61a2112686c3cbcfea377e07c653a59c490b039537dcf2c2ad416b8 Feb 27 10:51:19 crc kubenswrapper[4728]: I0227 10:51:19.843325 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.148823 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-88ac-account-create-update-q6xk7" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.158273 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-x426g" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.236198 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pz2h\" (UniqueName: \"kubernetes.io/projected/41862d42-5899-4daf-8f29-a24ba28d3908-kube-api-access-6pz2h\") pod \"41862d42-5899-4daf-8f29-a24ba28d3908\" (UID: \"41862d42-5899-4daf-8f29-a24ba28d3908\") " Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.236355 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41862d42-5899-4daf-8f29-a24ba28d3908-operator-scripts\") pod \"41862d42-5899-4daf-8f29-a24ba28d3908\" (UID: \"41862d42-5899-4daf-8f29-a24ba28d3908\") " Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.236599 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/204df3b2-d221-4ba6-811d-d232b4a5d12e-operator-scripts\") pod \"204df3b2-d221-4ba6-811d-d232b4a5d12e\" (UID: \"204df3b2-d221-4ba6-811d-d232b4a5d12e\") " Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.236636 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5bnz\" (UniqueName: \"kubernetes.io/projected/204df3b2-d221-4ba6-811d-d232b4a5d12e-kube-api-access-g5bnz\") pod \"204df3b2-d221-4ba6-811d-d232b4a5d12e\" (UID: \"204df3b2-d221-4ba6-811d-d232b4a5d12e\") " Feb 27 10:51:20 crc kubenswrapper[4728]: W0227 10:51:20.236848 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefa9e238_79b0_4757_acab_53537b5ae93a.slice/crio-74ef8f99282344ecb8d0dcb39e15ce74bec942b00c024aa6cbda5afb9c354ad4 WatchSource:0}: Error finding container 74ef8f99282344ecb8d0dcb39e15ce74bec942b00c024aa6cbda5afb9c354ad4: Status 404 returned error can't find the container with id 
74ef8f99282344ecb8d0dcb39e15ce74bec942b00c024aa6cbda5afb9c354ad4 Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.237088 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41862d42-5899-4daf-8f29-a24ba28d3908-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "41862d42-5899-4daf-8f29-a24ba28d3908" (UID: "41862d42-5899-4daf-8f29-a24ba28d3908"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.237304 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/204df3b2-d221-4ba6-811d-d232b4a5d12e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "204df3b2-d221-4ba6-811d-d232b4a5d12e" (UID: "204df3b2-d221-4ba6-811d-d232b4a5d12e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.237444 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41862d42-5899-4daf-8f29-a24ba28d3908-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.237460 4728 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/204df3b2-d221-4ba6-811d-d232b4a5d12e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.238751 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h4gl8"] Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.243450 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/204df3b2-d221-4ba6-811d-d232b4a5d12e-kube-api-access-g5bnz" (OuterVolumeSpecName: "kube-api-access-g5bnz") pod "204df3b2-d221-4ba6-811d-d232b4a5d12e" (UID: 
"204df3b2-d221-4ba6-811d-d232b4a5d12e"). InnerVolumeSpecName "kube-api-access-g5bnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.243672 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41862d42-5899-4daf-8f29-a24ba28d3908-kube-api-access-6pz2h" (OuterVolumeSpecName: "kube-api-access-6pz2h") pod "41862d42-5899-4daf-8f29-a24ba28d3908" (UID: "41862d42-5899-4daf-8f29-a24ba28d3908"). InnerVolumeSpecName "kube-api-access-6pz2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.340699 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5bnz\" (UniqueName: \"kubernetes.io/projected/204df3b2-d221-4ba6-811d-d232b4a5d12e-kube-api-access-g5bnz\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.340732 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pz2h\" (UniqueName: \"kubernetes.io/projected/41862d42-5899-4daf-8f29-a24ba28d3908-kube-api-access-6pz2h\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.663987 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-x426g" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.664821 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-x426g" event={"ID":"41862d42-5899-4daf-8f29-a24ba28d3908","Type":"ContainerDied","Data":"6b4a047e997109b933ea32523114e10147ab1430311cf8bf47242eb038e75bab"} Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.664868 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b4a047e997109b933ea32523114e10147ab1430311cf8bf47242eb038e75bab" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.671101 4728 generic.go:334] "Generic (PLEG): container finished" podID="136f2587-addd-432e-a591-fc74213bf87c" containerID="f1299dc3339493607dd2e559ab64e97ec7958bfd8a4d78aad865c5729007b21f" exitCode=0 Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.671169 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"136f2587-addd-432e-a591-fc74213bf87c","Type":"ContainerDied","Data":"f1299dc3339493607dd2e559ab64e97ec7958bfd8a4d78aad865c5729007b21f"} Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.674974 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-88ac-account-create-update-q6xk7" event={"ID":"204df3b2-d221-4ba6-811d-d232b4a5d12e","Type":"ContainerDied","Data":"e47f415e9bd3d45561f78c9549bdc9c0efebd37ca413ea5fdad0539a32607574"} Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.674997 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e47f415e9bd3d45561f78c9549bdc9c0efebd37ca413ea5fdad0539a32607574" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.675043 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-88ac-account-create-update-q6xk7" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.685887 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8","Type":"ContainerStarted","Data":"752366dc33bd3f59cec62f09b50960b6a4d19c83ad2c9d13f0fb06eba8337006"} Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.685935 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8","Type":"ContainerStarted","Data":"7733460b6570f072ed8fb2df9150baf3c06b13a0819a34de968d1856336c6fdb"} Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.685945 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8","Type":"ContainerStarted","Data":"e8d4aad0803ae1b1d37fdc2fe38d233412c5f13f1ca95f189c78312aa463b90d"} Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.689243 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4gl8" event={"ID":"efa9e238-79b0-4757-acab-53537b5ae93a","Type":"ContainerStarted","Data":"74ef8f99282344ecb8d0dcb39e15ce74bec942b00c024aa6cbda5afb9c354ad4"} Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.694222 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec89ac4c-d100-4004-bb62-0f5e6a344efd","Type":"ContainerStarted","Data":"31ae060db3042c2ee9fd2e07a59e8fbebf9c2620bdc53d1a88fce283fc5f05a8"} Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.694260 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec89ac4c-d100-4004-bb62-0f5e6a344efd","Type":"ContainerStarted","Data":"bc84e6508a54386778ee5d8c8866649291f99f6ccf8696cbb092027912f04156"} Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.694269 4728 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec89ac4c-d100-4004-bb62-0f5e6a344efd","Type":"ContainerStarted","Data":"79d25bc6f61a2112686c3cbcfea377e07c653a59c490b039537dcf2c2ad416b8"} Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.725443 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.725424872 podStartE2EDuration="2.725424872s" podCreationTimestamp="2026-02-27 10:51:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:51:20.714740762 +0000 UTC m=+1500.677106868" watchObservedRunningTime="2026-02-27 10:51:20.725424872 +0000 UTC m=+1500.687790978" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.749072 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.749050924 podStartE2EDuration="2.749050924s" podCreationTimestamp="2026-02-27 10:51:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:51:20.730552502 +0000 UTC m=+1500.692918608" watchObservedRunningTime="2026-02-27 10:51:20.749050924 +0000 UTC m=+1500.711417020" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.765981 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3" path="/var/lib/kubelet/pods/3e17d97c-f4dc-403c-b9ee-5ba6f43de1f3/volumes" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.766638 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23f1b2-4fb6-4e9d-a692-7d9640e4c999" path="/var/lib/kubelet/pods/bd23f1b2-4fb6-4e9d-a692-7d9640e4c999/volumes" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.865533 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.964176 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkfdk\" (UniqueName: \"kubernetes.io/projected/136f2587-addd-432e-a591-fc74213bf87c-kube-api-access-kkfdk\") pod \"136f2587-addd-432e-a591-fc74213bf87c\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.964875 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/136f2587-addd-432e-a591-fc74213bf87c-run-httpd\") pod \"136f2587-addd-432e-a591-fc74213bf87c\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.965326 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/136f2587-addd-432e-a591-fc74213bf87c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "136f2587-addd-432e-a591-fc74213bf87c" (UID: "136f2587-addd-432e-a591-fc74213bf87c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.965413 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-combined-ca-bundle\") pod \"136f2587-addd-432e-a591-fc74213bf87c\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.965943 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/136f2587-addd-432e-a591-fc74213bf87c-log-httpd\") pod \"136f2587-addd-432e-a591-fc74213bf87c\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.966385 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-scripts\") pod \"136f2587-addd-432e-a591-fc74213bf87c\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.966432 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-sg-core-conf-yaml\") pod \"136f2587-addd-432e-a591-fc74213bf87c\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.966472 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-config-data\") pod \"136f2587-addd-432e-a591-fc74213bf87c\" (UID: \"136f2587-addd-432e-a591-fc74213bf87c\") " Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.968261 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/136f2587-addd-432e-a591-fc74213bf87c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "136f2587-addd-432e-a591-fc74213bf87c" (UID: "136f2587-addd-432e-a591-fc74213bf87c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.969271 4728 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/136f2587-addd-432e-a591-fc74213bf87c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.969301 4728 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/136f2587-addd-432e-a591-fc74213bf87c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.971583 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-scripts" (OuterVolumeSpecName: "scripts") pod "136f2587-addd-432e-a591-fc74213bf87c" (UID: "136f2587-addd-432e-a591-fc74213bf87c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:20 crc kubenswrapper[4728]: I0227 10:51:20.971760 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/136f2587-addd-432e-a591-fc74213bf87c-kube-api-access-kkfdk" (OuterVolumeSpecName: "kube-api-access-kkfdk") pod "136f2587-addd-432e-a591-fc74213bf87c" (UID: "136f2587-addd-432e-a591-fc74213bf87c"). InnerVolumeSpecName "kube-api-access-kkfdk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.007747 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "136f2587-addd-432e-a591-fc74213bf87c" (UID: "136f2587-addd-432e-a591-fc74213bf87c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.050256 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "136f2587-addd-432e-a591-fc74213bf87c" (UID: "136f2587-addd-432e-a591-fc74213bf87c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.071872 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.071908 4728 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.071919 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkfdk\" (UniqueName: \"kubernetes.io/projected/136f2587-addd-432e-a591-fc74213bf87c-kube-api-access-kkfdk\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.071928 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.104418 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-config-data" (OuterVolumeSpecName: "config-data") pod "136f2587-addd-432e-a591-fc74213bf87c" (UID: "136f2587-addd-432e-a591-fc74213bf87c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.174420 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/136f2587-addd-432e-a591-fc74213bf87c-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.627240 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-txsbm"] Feb 27 10:51:21 crc kubenswrapper[4728]: E0227 10:51:21.627908 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="136f2587-addd-432e-a591-fc74213bf87c" containerName="ceilometer-notification-agent" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.627947 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="136f2587-addd-432e-a591-fc74213bf87c" containerName="ceilometer-notification-agent" Feb 27 10:51:21 crc kubenswrapper[4728]: E0227 10:51:21.627985 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="136f2587-addd-432e-a591-fc74213bf87c" containerName="sg-core" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.627994 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="136f2587-addd-432e-a591-fc74213bf87c" containerName="sg-core" Feb 27 10:51:21 crc kubenswrapper[4728]: E0227 10:51:21.628012 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204df3b2-d221-4ba6-811d-d232b4a5d12e" containerName="mariadb-account-create-update" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.628021 4728 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="204df3b2-d221-4ba6-811d-d232b4a5d12e" containerName="mariadb-account-create-update" Feb 27 10:51:21 crc kubenswrapper[4728]: E0227 10:51:21.628045 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="136f2587-addd-432e-a591-fc74213bf87c" containerName="proxy-httpd" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.628057 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="136f2587-addd-432e-a591-fc74213bf87c" containerName="proxy-httpd" Feb 27 10:51:21 crc kubenswrapper[4728]: E0227 10:51:21.628075 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="136f2587-addd-432e-a591-fc74213bf87c" containerName="ceilometer-central-agent" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.628083 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="136f2587-addd-432e-a591-fc74213bf87c" containerName="ceilometer-central-agent" Feb 27 10:51:21 crc kubenswrapper[4728]: E0227 10:51:21.628095 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41862d42-5899-4daf-8f29-a24ba28d3908" containerName="mariadb-database-create" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.628103 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="41862d42-5899-4daf-8f29-a24ba28d3908" containerName="mariadb-database-create" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.628378 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="136f2587-addd-432e-a591-fc74213bf87c" containerName="proxy-httpd" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.628409 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="136f2587-addd-432e-a591-fc74213bf87c" containerName="ceilometer-central-agent" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.628425 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="136f2587-addd-432e-a591-fc74213bf87c" containerName="ceilometer-notification-agent" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 
10:51:21.628435 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="136f2587-addd-432e-a591-fc74213bf87c" containerName="sg-core" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.628444 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="204df3b2-d221-4ba6-811d-d232b4a5d12e" containerName="mariadb-account-create-update" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.628466 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="41862d42-5899-4daf-8f29-a24ba28d3908" containerName="mariadb-database-create" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.629547 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-txsbm" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.631724 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-4dctm" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.631891 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.631953 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.632141 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.646743 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-txsbm"] Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.688225 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/138d00ee-f707-4482-8966-5a7f182ae3bf-scripts\") pod \"aodh-db-sync-txsbm\" (UID: \"138d00ee-f707-4482-8966-5a7f182ae3bf\") " pod="openstack/aodh-db-sync-txsbm" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.689490 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138d00ee-f707-4482-8966-5a7f182ae3bf-combined-ca-bundle\") pod \"aodh-db-sync-txsbm\" (UID: \"138d00ee-f707-4482-8966-5a7f182ae3bf\") " pod="openstack/aodh-db-sync-txsbm" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.689657 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2fk7\" (UniqueName: \"kubernetes.io/projected/138d00ee-f707-4482-8966-5a7f182ae3bf-kube-api-access-d2fk7\") pod \"aodh-db-sync-txsbm\" (UID: \"138d00ee-f707-4482-8966-5a7f182ae3bf\") " pod="openstack/aodh-db-sync-txsbm" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.689770 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/138d00ee-f707-4482-8966-5a7f182ae3bf-config-data\") pod \"aodh-db-sync-txsbm\" (UID: \"138d00ee-f707-4482-8966-5a7f182ae3bf\") " pod="openstack/aodh-db-sync-txsbm" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.705429 4728 generic.go:334] "Generic (PLEG): container finished" podID="efa9e238-79b0-4757-acab-53537b5ae93a" containerID="20c67cb254710b9dab1898fc699ab336aed6ee306edafed02c5eff4caee96ea8" exitCode=0 Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.705588 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4gl8" event={"ID":"efa9e238-79b0-4757-acab-53537b5ae93a","Type":"ContainerDied","Data":"20c67cb254710b9dab1898fc699ab336aed6ee306edafed02c5eff4caee96ea8"} Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.712016 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.712597 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"136f2587-addd-432e-a591-fc74213bf87c","Type":"ContainerDied","Data":"a674d4c4b12ee092a5d35c78b9b919f291cbd07c79fe9a8f3dcf096b23815eac"} Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.712642 4728 scope.go:117] "RemoveContainer" containerID="758ac345a0bdbef7ea8def9b887d6f5ab6b80eda76934b927f7079883781c5b4" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.739776 4728 scope.go:117] "RemoveContainer" containerID="85a5dac94aed8e183f346891f01f0a3e27e460c6b1ff4d65ebc507cd27c6688a" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.767711 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.773668 4728 scope.go:117] "RemoveContainer" containerID="7e8d307f189e01e4d67ecaacdb8d9c40063a7dad4c7d69add94feaf0884e4609" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.791601 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138d00ee-f707-4482-8966-5a7f182ae3bf-combined-ca-bundle\") pod \"aodh-db-sync-txsbm\" (UID: \"138d00ee-f707-4482-8966-5a7f182ae3bf\") " pod="openstack/aodh-db-sync-txsbm" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.792777 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2fk7\" (UniqueName: \"kubernetes.io/projected/138d00ee-f707-4482-8966-5a7f182ae3bf-kube-api-access-d2fk7\") pod \"aodh-db-sync-txsbm\" (UID: \"138d00ee-f707-4482-8966-5a7f182ae3bf\") " pod="openstack/aodh-db-sync-txsbm" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.793162 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/138d00ee-f707-4482-8966-5a7f182ae3bf-config-data\") pod \"aodh-db-sync-txsbm\" (UID: \"138d00ee-f707-4482-8966-5a7f182ae3bf\") " pod="openstack/aodh-db-sync-txsbm" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.793427 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/138d00ee-f707-4482-8966-5a7f182ae3bf-scripts\") pod \"aodh-db-sync-txsbm\" (UID: \"138d00ee-f707-4482-8966-5a7f182ae3bf\") " pod="openstack/aodh-db-sync-txsbm" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.816576 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.819128 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/138d00ee-f707-4482-8966-5a7f182ae3bf-scripts\") pod \"aodh-db-sync-txsbm\" (UID: \"138d00ee-f707-4482-8966-5a7f182ae3bf\") " pod="openstack/aodh-db-sync-txsbm" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.823858 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/138d00ee-f707-4482-8966-5a7f182ae3bf-config-data\") pod \"aodh-db-sync-txsbm\" (UID: \"138d00ee-f707-4482-8966-5a7f182ae3bf\") " pod="openstack/aodh-db-sync-txsbm" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.828271 4728 scope.go:117] "RemoveContainer" containerID="f1299dc3339493607dd2e559ab64e97ec7958bfd8a4d78aad865c5729007b21f" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.831642 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138d00ee-f707-4482-8966-5a7f182ae3bf-combined-ca-bundle\") pod \"aodh-db-sync-txsbm\" (UID: \"138d00ee-f707-4482-8966-5a7f182ae3bf\") " pod="openstack/aodh-db-sync-txsbm" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.833272 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2fk7\" (UniqueName: \"kubernetes.io/projected/138d00ee-f707-4482-8966-5a7f182ae3bf-kube-api-access-d2fk7\") pod \"aodh-db-sync-txsbm\" (UID: \"138d00ee-f707-4482-8966-5a7f182ae3bf\") " pod="openstack/aodh-db-sync-txsbm" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.836915 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.840945 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.844089 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.844317 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.849303 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.924151 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-scripts\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.924283 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75262df4-2453-419b-b516-f7a0d58deb82-log-httpd\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.924346 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.924534 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl6ms\" (UniqueName: \"kubernetes.io/projected/75262df4-2453-419b-b516-f7a0d58deb82-kube-api-access-hl6ms\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.924711 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.924842 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-config-data\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.924908 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75262df4-2453-419b-b516-f7a0d58deb82-run-httpd\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:21 crc kubenswrapper[4728]: I0227 10:51:21.959823 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-txsbm" Feb 27 10:51:22 crc kubenswrapper[4728]: I0227 10:51:22.026853 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-config-data\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:22 crc kubenswrapper[4728]: I0227 10:51:22.026925 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75262df4-2453-419b-b516-f7a0d58deb82-run-httpd\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:22 crc kubenswrapper[4728]: I0227 10:51:22.027023 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-scripts\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:22 crc kubenswrapper[4728]: I0227 10:51:22.027187 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75262df4-2453-419b-b516-f7a0d58deb82-log-httpd\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:22 crc kubenswrapper[4728]: I0227 10:51:22.027411 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75262df4-2453-419b-b516-f7a0d58deb82-run-httpd\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:22 crc kubenswrapper[4728]: I0227 10:51:22.027981 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/75262df4-2453-419b-b516-f7a0d58deb82-log-httpd\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:22 crc kubenswrapper[4728]: I0227 10:51:22.028056 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:22 crc kubenswrapper[4728]: I0227 10:51:22.028525 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl6ms\" (UniqueName: \"kubernetes.io/projected/75262df4-2453-419b-b516-f7a0d58deb82-kube-api-access-hl6ms\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:22 crc kubenswrapper[4728]: I0227 10:51:22.028777 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:22 crc kubenswrapper[4728]: I0227 10:51:22.031010 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-scripts\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:22 crc kubenswrapper[4728]: I0227 10:51:22.031386 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:22 crc kubenswrapper[4728]: 
I0227 10:51:22.033285 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:22 crc kubenswrapper[4728]: I0227 10:51:22.033550 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-config-data\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:22 crc kubenswrapper[4728]: I0227 10:51:22.049179 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl6ms\" (UniqueName: \"kubernetes.io/projected/75262df4-2453-419b-b516-f7a0d58deb82-kube-api-access-hl6ms\") pod \"ceilometer-0\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " pod="openstack/ceilometer-0" Feb 27 10:51:22 crc kubenswrapper[4728]: I0227 10:51:22.229207 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:51:22 crc kubenswrapper[4728]: I0227 10:51:22.498814 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-txsbm"] Feb 27 10:51:22 crc kubenswrapper[4728]: I0227 10:51:22.711488 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:51:22 crc kubenswrapper[4728]: I0227 10:51:22.742133 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="136f2587-addd-432e-a591-fc74213bf87c" path="/var/lib/kubelet/pods/136f2587-addd-432e-a591-fc74213bf87c/volumes" Feb 27 10:51:22 crc kubenswrapper[4728]: I0227 10:51:22.745907 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-txsbm" event={"ID":"138d00ee-f707-4482-8966-5a7f182ae3bf","Type":"ContainerStarted","Data":"d8a0e9f6bb831d8b3460ab7dc75abe8cda9eb2f96f93b2b02354075434e3e5b7"} Feb 27 10:51:23 crc kubenswrapper[4728]: I0227 10:51:23.757654 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4gl8" event={"ID":"efa9e238-79b0-4757-acab-53537b5ae93a","Type":"ContainerStarted","Data":"e8364fc84ab88348ec296479a71d7b4bfcacbc70aaba3595152c2dd8e66ef77a"} Feb 27 10:51:23 crc kubenswrapper[4728]: I0227 10:51:23.763811 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75262df4-2453-419b-b516-f7a0d58deb82","Type":"ContainerStarted","Data":"2f7db1d2699a79a45efe79543ae1474ab17436fbd5be080f1985b713f4ce56bf"} Feb 27 10:51:23 crc kubenswrapper[4728]: I0227 10:51:23.763850 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75262df4-2453-419b-b516-f7a0d58deb82","Type":"ContainerStarted","Data":"979c2b2ae1a777a6b62b017c865d49601d1c37487bc8d22d2dd2713e4b82d3a6"} Feb 27 10:51:24 crc kubenswrapper[4728]: I0227 10:51:24.307897 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 
27 10:51:24 crc kubenswrapper[4728]: I0227 10:51:24.308237 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 10:51:24 crc kubenswrapper[4728]: I0227 10:51:24.777908 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75262df4-2453-419b-b516-f7a0d58deb82","Type":"ContainerStarted","Data":"5da9b531de57551bb290118fc0f5853758a3bae376fa5fe9eeed8ea959871936"} Feb 27 10:51:27 crc kubenswrapper[4728]: I0227 10:51:27.934314 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 10:51:28 crc kubenswrapper[4728]: I0227 10:51:28.829713 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-txsbm" event={"ID":"138d00ee-f707-4482-8966-5a7f182ae3bf","Type":"ContainerStarted","Data":"cc5279c9def68b0f45ba0a3747a69db7f715ce1c662903d49ec00534bf0b9335"} Feb 27 10:51:28 crc kubenswrapper[4728]: I0227 10:51:28.832133 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75262df4-2453-419b-b516-f7a0d58deb82","Type":"ContainerStarted","Data":"a93105e8da41986fcc76fdb27291716d041b79c8fa83154a8faf05fbc4b5b939"} Feb 27 10:51:28 crc kubenswrapper[4728]: I0227 10:51:28.855487 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-txsbm" podStartSLOduration=2.8681013589999997 podStartE2EDuration="7.855464372s" podCreationTimestamp="2026-02-27 10:51:21 +0000 UTC" firstStartedPulling="2026-02-27 10:51:22.545593833 +0000 UTC m=+1502.507959939" lastFinishedPulling="2026-02-27 10:51:27.532956836 +0000 UTC m=+1507.495322952" observedRunningTime="2026-02-27 10:51:28.84762903 +0000 UTC m=+1508.809995156" watchObservedRunningTime="2026-02-27 10:51:28.855464372 +0000 UTC m=+1508.817830488" Feb 27 10:51:29 crc kubenswrapper[4728]: I0227 10:51:29.152681 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Feb 27 10:51:29 crc kubenswrapper[4728]: I0227 10:51:29.153051 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 10:51:29 crc kubenswrapper[4728]: I0227 10:51:29.306882 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 10:51:29 crc kubenswrapper[4728]: I0227 10:51:29.306921 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 10:51:30 crc kubenswrapper[4728]: I0227 10:51:30.235020 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.3:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 10:51:30 crc kubenswrapper[4728]: I0227 10:51:30.235090 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.3:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 10:51:30 crc kubenswrapper[4728]: I0227 10:51:30.322808 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ec89ac4c-d100-4004-bb62-0f5e6a344efd" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.4:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 10:51:30 crc kubenswrapper[4728]: I0227 10:51:30.322808 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ec89ac4c-d100-4004-bb62-0f5e6a344efd" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.4:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" 
Feb 27 10:51:30 crc kubenswrapper[4728]: I0227 10:51:30.867601 4728 generic.go:334] "Generic (PLEG): container finished" podID="138d00ee-f707-4482-8966-5a7f182ae3bf" containerID="cc5279c9def68b0f45ba0a3747a69db7f715ce1c662903d49ec00534bf0b9335" exitCode=0 Feb 27 10:51:30 crc kubenswrapper[4728]: I0227 10:51:30.867636 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-txsbm" event={"ID":"138d00ee-f707-4482-8966-5a7f182ae3bf","Type":"ContainerDied","Data":"cc5279c9def68b0f45ba0a3747a69db7f715ce1c662903d49ec00534bf0b9335"} Feb 27 10:51:30 crc kubenswrapper[4728]: I0227 10:51:30.871914 4728 generic.go:334] "Generic (PLEG): container finished" podID="efa9e238-79b0-4757-acab-53537b5ae93a" containerID="e8364fc84ab88348ec296479a71d7b4bfcacbc70aaba3595152c2dd8e66ef77a" exitCode=0 Feb 27 10:51:30 crc kubenswrapper[4728]: I0227 10:51:30.872017 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4gl8" event={"ID":"efa9e238-79b0-4757-acab-53537b5ae93a","Type":"ContainerDied","Data":"e8364fc84ab88348ec296479a71d7b4bfcacbc70aaba3595152c2dd8e66ef77a"} Feb 27 10:51:30 crc kubenswrapper[4728]: I0227 10:51:30.880600 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75262df4-2453-419b-b516-f7a0d58deb82","Type":"ContainerStarted","Data":"fea4c381c2e7aa12818964364b2085d666b42a1f0e711d978e1344b3a57850f0"} Feb 27 10:51:30 crc kubenswrapper[4728]: I0227 10:51:30.881823 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 10:51:30 crc kubenswrapper[4728]: I0227 10:51:30.930356 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.975830862 podStartE2EDuration="9.930330733s" podCreationTimestamp="2026-02-27 10:51:21 +0000 UTC" firstStartedPulling="2026-02-27 10:51:22.721673361 +0000 UTC m=+1502.684039467" lastFinishedPulling="2026-02-27 
10:51:29.676173232 +0000 UTC m=+1509.638539338" observedRunningTime="2026-02-27 10:51:30.925134843 +0000 UTC m=+1510.887500989" watchObservedRunningTime="2026-02-27 10:51:30.930330733 +0000 UTC m=+1510.892696839" Feb 27 10:51:32 crc kubenswrapper[4728]: I0227 10:51:32.353622 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-txsbm" Feb 27 10:51:32 crc kubenswrapper[4728]: I0227 10:51:32.386435 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2fk7\" (UniqueName: \"kubernetes.io/projected/138d00ee-f707-4482-8966-5a7f182ae3bf-kube-api-access-d2fk7\") pod \"138d00ee-f707-4482-8966-5a7f182ae3bf\" (UID: \"138d00ee-f707-4482-8966-5a7f182ae3bf\") " Feb 27 10:51:32 crc kubenswrapper[4728]: I0227 10:51:32.386629 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/138d00ee-f707-4482-8966-5a7f182ae3bf-scripts\") pod \"138d00ee-f707-4482-8966-5a7f182ae3bf\" (UID: \"138d00ee-f707-4482-8966-5a7f182ae3bf\") " Feb 27 10:51:32 crc kubenswrapper[4728]: I0227 10:51:32.386765 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/138d00ee-f707-4482-8966-5a7f182ae3bf-config-data\") pod \"138d00ee-f707-4482-8966-5a7f182ae3bf\" (UID: \"138d00ee-f707-4482-8966-5a7f182ae3bf\") " Feb 27 10:51:32 crc kubenswrapper[4728]: I0227 10:51:32.386846 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138d00ee-f707-4482-8966-5a7f182ae3bf-combined-ca-bundle\") pod \"138d00ee-f707-4482-8966-5a7f182ae3bf\" (UID: \"138d00ee-f707-4482-8966-5a7f182ae3bf\") " Feb 27 10:51:32 crc kubenswrapper[4728]: I0227 10:51:32.401915 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/138d00ee-f707-4482-8966-5a7f182ae3bf-kube-api-access-d2fk7" (OuterVolumeSpecName: "kube-api-access-d2fk7") pod "138d00ee-f707-4482-8966-5a7f182ae3bf" (UID: "138d00ee-f707-4482-8966-5a7f182ae3bf"). InnerVolumeSpecName "kube-api-access-d2fk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:32 crc kubenswrapper[4728]: I0227 10:51:32.406606 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/138d00ee-f707-4482-8966-5a7f182ae3bf-scripts" (OuterVolumeSpecName: "scripts") pod "138d00ee-f707-4482-8966-5a7f182ae3bf" (UID: "138d00ee-f707-4482-8966-5a7f182ae3bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:32 crc kubenswrapper[4728]: I0227 10:51:32.428538 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/138d00ee-f707-4482-8966-5a7f182ae3bf-config-data" (OuterVolumeSpecName: "config-data") pod "138d00ee-f707-4482-8966-5a7f182ae3bf" (UID: "138d00ee-f707-4482-8966-5a7f182ae3bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:32 crc kubenswrapper[4728]: I0227 10:51:32.441093 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/138d00ee-f707-4482-8966-5a7f182ae3bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "138d00ee-f707-4482-8966-5a7f182ae3bf" (UID: "138d00ee-f707-4482-8966-5a7f182ae3bf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:32 crc kubenswrapper[4728]: I0227 10:51:32.490061 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/138d00ee-f707-4482-8966-5a7f182ae3bf-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:32 crc kubenswrapper[4728]: I0227 10:51:32.490096 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/138d00ee-f707-4482-8966-5a7f182ae3bf-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:32 crc kubenswrapper[4728]: I0227 10:51:32.490106 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138d00ee-f707-4482-8966-5a7f182ae3bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:32 crc kubenswrapper[4728]: I0227 10:51:32.490119 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2fk7\" (UniqueName: \"kubernetes.io/projected/138d00ee-f707-4482-8966-5a7f182ae3bf-kube-api-access-d2fk7\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:32 crc kubenswrapper[4728]: I0227 10:51:32.904858 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4gl8" event={"ID":"efa9e238-79b0-4757-acab-53537b5ae93a","Type":"ContainerStarted","Data":"226995caa65f7b453cbb5dd7edc24c62282b79109c70e17cb47825c47482f362"} Feb 27 10:51:32 crc kubenswrapper[4728]: I0227 10:51:32.909009 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-txsbm" Feb 27 10:51:32 crc kubenswrapper[4728]: I0227 10:51:32.909778 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-txsbm" event={"ID":"138d00ee-f707-4482-8966-5a7f182ae3bf","Type":"ContainerDied","Data":"d8a0e9f6bb831d8b3460ab7dc75abe8cda9eb2f96f93b2b02354075434e3e5b7"} Feb 27 10:51:32 crc kubenswrapper[4728]: I0227 10:51:32.909818 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8a0e9f6bb831d8b3460ab7dc75abe8cda9eb2f96f93b2b02354075434e3e5b7" Feb 27 10:51:32 crc kubenswrapper[4728]: I0227 10:51:32.937850 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h4gl8" podStartSLOduration=3.762064846 podStartE2EDuration="13.937832267s" podCreationTimestamp="2026-02-27 10:51:19 +0000 UTC" firstStartedPulling="2026-02-27 10:51:21.707653695 +0000 UTC m=+1501.670019801" lastFinishedPulling="2026-02-27 10:51:31.883421116 +0000 UTC m=+1511.845787222" observedRunningTime="2026-02-27 10:51:32.928926116 +0000 UTC m=+1512.891292232" watchObservedRunningTime="2026-02-27 10:51:32.937832267 +0000 UTC m=+1512.900198373" Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.734971 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.801597 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/214d42db-ca45-403f-89e3-7026fb6abed2-config-data\") pod \"214d42db-ca45-403f-89e3-7026fb6abed2\" (UID: \"214d42db-ca45-403f-89e3-7026fb6abed2\") " Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.801807 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twmf7\" (UniqueName: \"kubernetes.io/projected/214d42db-ca45-403f-89e3-7026fb6abed2-kube-api-access-twmf7\") pod \"214d42db-ca45-403f-89e3-7026fb6abed2\" (UID: \"214d42db-ca45-403f-89e3-7026fb6abed2\") " Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.801888 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214d42db-ca45-403f-89e3-7026fb6abed2-combined-ca-bundle\") pod \"214d42db-ca45-403f-89e3-7026fb6abed2\" (UID: \"214d42db-ca45-403f-89e3-7026fb6abed2\") " Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.807255 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/214d42db-ca45-403f-89e3-7026fb6abed2-kube-api-access-twmf7" (OuterVolumeSpecName: "kube-api-access-twmf7") pod "214d42db-ca45-403f-89e3-7026fb6abed2" (UID: "214d42db-ca45-403f-89e3-7026fb6abed2"). InnerVolumeSpecName "kube-api-access-twmf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.834318 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/214d42db-ca45-403f-89e3-7026fb6abed2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "214d42db-ca45-403f-89e3-7026fb6abed2" (UID: "214d42db-ca45-403f-89e3-7026fb6abed2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.837645 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/214d42db-ca45-403f-89e3-7026fb6abed2-config-data" (OuterVolumeSpecName: "config-data") pod "214d42db-ca45-403f-89e3-7026fb6abed2" (UID: "214d42db-ca45-403f-89e3-7026fb6abed2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.897826 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.904426 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twmf7\" (UniqueName: \"kubernetes.io/projected/214d42db-ca45-403f-89e3-7026fb6abed2-kube-api-access-twmf7\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.904461 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214d42db-ca45-403f-89e3-7026fb6abed2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.904472 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/214d42db-ca45-403f-89e3-7026fb6abed2-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.922271 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.922328 4728 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.948324 4728 generic.go:334] "Generic (PLEG): container finished" podID="214d42db-ca45-403f-89e3-7026fb6abed2" containerID="171887b04082b4cf63e4c743d6389ebad685207ffac1fd0da2bd9773c241220a" exitCode=137 Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.948395 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"214d42db-ca45-403f-89e3-7026fb6abed2","Type":"ContainerDied","Data":"171887b04082b4cf63e4c743d6389ebad685207ffac1fd0da2bd9773c241220a"} Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.948456 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"214d42db-ca45-403f-89e3-7026fb6abed2","Type":"ContainerDied","Data":"fdf2e1d636a05173ca7fc92d13da23736a716b0b721b9c679fa9430e29a5c4d8"} Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.948479 4728 scope.go:117] "RemoveContainer" containerID="171887b04082b4cf63e4c743d6389ebad685207ffac1fd0da2bd9773c241220a" Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.948639 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.961281 4728 generic.go:334] "Generic (PLEG): container finished" podID="67af1e62-168c-4a94-a206-79158119b0a4" containerID="35470350f7e534e870a85053d34cdc34d1ac705756500146263f184b9fac2860" exitCode=137 Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.961319 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"67af1e62-168c-4a94-a206-79158119b0a4","Type":"ContainerDied","Data":"35470350f7e534e870a85053d34cdc34d1ac705756500146263f184b9fac2860"} Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.961344 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"67af1e62-168c-4a94-a206-79158119b0a4","Type":"ContainerDied","Data":"e7915ecf5e4c34868ea4181cdf1fd6414005a1a371bcdad0299301d0d62751b6"} Feb 27 10:51:35 crc kubenswrapper[4728]: I0227 10:51:35.961409 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.006478 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67af1e62-168c-4a94-a206-79158119b0a4-combined-ca-bundle\") pod \"67af1e62-168c-4a94-a206-79158119b0a4\" (UID: \"67af1e62-168c-4a94-a206-79158119b0a4\") " Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.008377 4728 scope.go:117] "RemoveContainer" containerID="171887b04082b4cf63e4c743d6389ebad685207ffac1fd0da2bd9773c241220a" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.009008 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv4d5\" (UniqueName: \"kubernetes.io/projected/67af1e62-168c-4a94-a206-79158119b0a4-kube-api-access-cv4d5\") pod \"67af1e62-168c-4a94-a206-79158119b0a4\" (UID: \"67af1e62-168c-4a94-a206-79158119b0a4\") " Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.009425 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67af1e62-168c-4a94-a206-79158119b0a4-config-data\") pod \"67af1e62-168c-4a94-a206-79158119b0a4\" (UID: \"67af1e62-168c-4a94-a206-79158119b0a4\") " Feb 27 10:51:36 crc kubenswrapper[4728]: E0227 10:51:36.010480 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"171887b04082b4cf63e4c743d6389ebad685207ffac1fd0da2bd9773c241220a\": container with ID starting with 171887b04082b4cf63e4c743d6389ebad685207ffac1fd0da2bd9773c241220a not found: ID does not exist" containerID="171887b04082b4cf63e4c743d6389ebad685207ffac1fd0da2bd9773c241220a" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.011240 4728 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"171887b04082b4cf63e4c743d6389ebad685207ffac1fd0da2bd9773c241220a"} err="failed to get container status \"171887b04082b4cf63e4c743d6389ebad685207ffac1fd0da2bd9773c241220a\": rpc error: code = NotFound desc = could not find container \"171887b04082b4cf63e4c743d6389ebad685207ffac1fd0da2bd9773c241220a\": container with ID starting with 171887b04082b4cf63e4c743d6389ebad685207ffac1fd0da2bd9773c241220a not found: ID does not exist" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.011285 4728 scope.go:117] "RemoveContainer" containerID="35470350f7e534e870a85053d34cdc34d1ac705756500146263f184b9fac2860" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.011165 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.017152 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67af1e62-168c-4a94-a206-79158119b0a4-kube-api-access-cv4d5" (OuterVolumeSpecName: "kube-api-access-cv4d5") pod "67af1e62-168c-4a94-a206-79158119b0a4" (UID: "67af1e62-168c-4a94-a206-79158119b0a4"). InnerVolumeSpecName "kube-api-access-cv4d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.036156 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.044826 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67af1e62-168c-4a94-a206-79158119b0a4-config-data" (OuterVolumeSpecName: "config-data") pod "67af1e62-168c-4a94-a206-79158119b0a4" (UID: "67af1e62-168c-4a94-a206-79158119b0a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.058098 4728 scope.go:117] "RemoveContainer" containerID="35470350f7e534e870a85053d34cdc34d1ac705756500146263f184b9fac2860" Feb 27 10:51:36 crc kubenswrapper[4728]: E0227 10:51:36.058643 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35470350f7e534e870a85053d34cdc34d1ac705756500146263f184b9fac2860\": container with ID starting with 35470350f7e534e870a85053d34cdc34d1ac705756500146263f184b9fac2860 not found: ID does not exist" containerID="35470350f7e534e870a85053d34cdc34d1ac705756500146263f184b9fac2860" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.058699 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35470350f7e534e870a85053d34cdc34d1ac705756500146263f184b9fac2860"} err="failed to get container status \"35470350f7e534e870a85053d34cdc34d1ac705756500146263f184b9fac2860\": rpc error: code = NotFound desc = could not find container \"35470350f7e534e870a85053d34cdc34d1ac705756500146263f184b9fac2860\": container with ID starting with 35470350f7e534e870a85053d34cdc34d1ac705756500146263f184b9fac2860 not found: ID does not exist" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.064653 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 10:51:36 crc kubenswrapper[4728]: E0227 10:51:36.065278 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214d42db-ca45-403f-89e3-7026fb6abed2" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.065346 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="214d42db-ca45-403f-89e3-7026fb6abed2" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 10:51:36 crc kubenswrapper[4728]: E0227 10:51:36.065414 4728 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="138d00ee-f707-4482-8966-5a7f182ae3bf" containerName="aodh-db-sync" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.065463 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="138d00ee-f707-4482-8966-5a7f182ae3bf" containerName="aodh-db-sync" Feb 27 10:51:36 crc kubenswrapper[4728]: E0227 10:51:36.065543 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67af1e62-168c-4a94-a206-79158119b0a4" containerName="nova-scheduler-scheduler" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.065630 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="67af1e62-168c-4a94-a206-79158119b0a4" containerName="nova-scheduler-scheduler" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.065888 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="67af1e62-168c-4a94-a206-79158119b0a4" containerName="nova-scheduler-scheduler" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.065971 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="214d42db-ca45-403f-89e3-7026fb6abed2" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.066040 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="138d00ee-f707-4482-8966-5a7f182ae3bf" containerName="aodh-db-sync" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.067162 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.067448 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67af1e62-168c-4a94-a206-79158119b0a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67af1e62-168c-4a94-a206-79158119b0a4" (UID: "67af1e62-168c-4a94-a206-79158119b0a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.069322 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.069742 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.069743 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.080416 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.112884 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67af1e62-168c-4a94-a206-79158119b0a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.112921 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv4d5\" (UniqueName: \"kubernetes.io/projected/67af1e62-168c-4a94-a206-79158119b0a4-kube-api-access-cv4d5\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.112930 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67af1e62-168c-4a94-a206-79158119b0a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.214766 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfhjc\" (UniqueName: \"kubernetes.io/projected/32bb7294-8dfc-4b00-9227-445c322a47a1-kube-api-access-zfhjc\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bb7294-8dfc-4b00-9227-445c322a47a1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:36 crc 
kubenswrapper[4728]: I0227 10:51:36.214842 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bb7294-8dfc-4b00-9227-445c322a47a1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bb7294-8dfc-4b00-9227-445c322a47a1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.214928 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bb7294-8dfc-4b00-9227-445c322a47a1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bb7294-8dfc-4b00-9227-445c322a47a1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.215029 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32bb7294-8dfc-4b00-9227-445c322a47a1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bb7294-8dfc-4b00-9227-445c322a47a1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.215063 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bb7294-8dfc-4b00-9227-445c322a47a1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bb7294-8dfc-4b00-9227-445c322a47a1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.284267 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.293484 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.301116 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-4dctm" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.301321 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.301623 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.317647 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfhjc\" (UniqueName: \"kubernetes.io/projected/32bb7294-8dfc-4b00-9227-445c322a47a1-kube-api-access-zfhjc\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bb7294-8dfc-4b00-9227-445c322a47a1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.317724 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bb7294-8dfc-4b00-9227-445c322a47a1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bb7294-8dfc-4b00-9227-445c322a47a1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.317788 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bb7294-8dfc-4b00-9227-445c322a47a1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bb7294-8dfc-4b00-9227-445c322a47a1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.317973 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32bb7294-8dfc-4b00-9227-445c322a47a1-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"32bb7294-8dfc-4b00-9227-445c322a47a1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.318015 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bb7294-8dfc-4b00-9227-445c322a47a1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bb7294-8dfc-4b00-9227-445c322a47a1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.330152 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bb7294-8dfc-4b00-9227-445c322a47a1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bb7294-8dfc-4b00-9227-445c322a47a1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.330158 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bb7294-8dfc-4b00-9227-445c322a47a1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bb7294-8dfc-4b00-9227-445c322a47a1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.331645 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bb7294-8dfc-4b00-9227-445c322a47a1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bb7294-8dfc-4b00-9227-445c322a47a1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.337716 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.343204 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/32bb7294-8dfc-4b00-9227-445c322a47a1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bb7294-8dfc-4b00-9227-445c322a47a1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.359638 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfhjc\" (UniqueName: \"kubernetes.io/projected/32bb7294-8dfc-4b00-9227-445c322a47a1-kube-api-access-zfhjc\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bb7294-8dfc-4b00-9227-445c322a47a1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.387302 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.430850 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-scripts\") pod \"aodh-0\" (UID: \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\") " pod="openstack/aodh-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.430933 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\") " pod="openstack/aodh-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.431003 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt2sh\" (UniqueName: \"kubernetes.io/projected/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-kube-api-access-pt2sh\") pod \"aodh-0\" (UID: \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\") " pod="openstack/aodh-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.431123 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-config-data\") pod \"aodh-0\" (UID: \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\") " pod="openstack/aodh-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.532893 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-config-data\") pod \"aodh-0\" (UID: \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\") " pod="openstack/aodh-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.532996 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-scripts\") pod \"aodh-0\" (UID: \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\") " pod="openstack/aodh-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.533057 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\") " pod="openstack/aodh-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.533125 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt2sh\" (UniqueName: \"kubernetes.io/projected/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-kube-api-access-pt2sh\") pod \"aodh-0\" (UID: \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\") " pod="openstack/aodh-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.540399 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-scripts\") pod \"aodh-0\" (UID: \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\") " pod="openstack/aodh-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.547055 4728 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-config-data\") pod \"aodh-0\" (UID: \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\") " pod="openstack/aodh-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.548067 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\") " pod="openstack/aodh-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.551551 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt2sh\" (UniqueName: \"kubernetes.io/projected/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-kube-api-access-pt2sh\") pod \"aodh-0\" (UID: \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\") " pod="openstack/aodh-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.651123 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.658704 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.676286 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.691974 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.693558 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.695635 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.704469 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.741465 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="214d42db-ca45-403f-89e3-7026fb6abed2" path="/var/lib/kubelet/pods/214d42db-ca45-403f-89e3-7026fb6abed2/volumes" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.742167 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67af1e62-168c-4a94-a206-79158119b0a4" path="/var/lib/kubelet/pods/67af1e62-168c-4a94-a206-79158119b0a4/volumes" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.846106 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-587r2\" (UniqueName: \"kubernetes.io/projected/a22e62c7-44fe-4603-af1a-95ca06a943c4-kube-api-access-587r2\") pod \"nova-scheduler-0\" (UID: \"a22e62c7-44fe-4603-af1a-95ca06a943c4\") " pod="openstack/nova-scheduler-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.846160 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a22e62c7-44fe-4603-af1a-95ca06a943c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a22e62c7-44fe-4603-af1a-95ca06a943c4\") " pod="openstack/nova-scheduler-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.846207 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a22e62c7-44fe-4603-af1a-95ca06a943c4-config-data\") pod \"nova-scheduler-0\" (UID: 
\"a22e62c7-44fe-4603-af1a-95ca06a943c4\") " pod="openstack/nova-scheduler-0" Feb 27 10:51:36 crc kubenswrapper[4728]: W0227 10:51:36.939256 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32bb7294_8dfc_4b00_9227_445c322a47a1.slice/crio-6ca84247038cad9a0a891c35beff16ded4bd1ddd3f81cd1654114e39702b559e WatchSource:0}: Error finding container 6ca84247038cad9a0a891c35beff16ded4bd1ddd3f81cd1654114e39702b559e: Status 404 returned error can't find the container with id 6ca84247038cad9a0a891c35beff16ded4bd1ddd3f81cd1654114e39702b559e Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.948822 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.951076 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-587r2\" (UniqueName: \"kubernetes.io/projected/a22e62c7-44fe-4603-af1a-95ca06a943c4-kube-api-access-587r2\") pod \"nova-scheduler-0\" (UID: \"a22e62c7-44fe-4603-af1a-95ca06a943c4\") " pod="openstack/nova-scheduler-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.951120 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a22e62c7-44fe-4603-af1a-95ca06a943c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a22e62c7-44fe-4603-af1a-95ca06a943c4\") " pod="openstack/nova-scheduler-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.951167 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a22e62c7-44fe-4603-af1a-95ca06a943c4-config-data\") pod \"nova-scheduler-0\" (UID: \"a22e62c7-44fe-4603-af1a-95ca06a943c4\") " pod="openstack/nova-scheduler-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.956018 4728 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a22e62c7-44fe-4603-af1a-95ca06a943c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a22e62c7-44fe-4603-af1a-95ca06a943c4\") " pod="openstack/nova-scheduler-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.959832 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a22e62c7-44fe-4603-af1a-95ca06a943c4-config-data\") pod \"nova-scheduler-0\" (UID: \"a22e62c7-44fe-4603-af1a-95ca06a943c4\") " pod="openstack/nova-scheduler-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.969038 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-587r2\" (UniqueName: \"kubernetes.io/projected/a22e62c7-44fe-4603-af1a-95ca06a943c4-kube-api-access-587r2\") pod \"nova-scheduler-0\" (UID: \"a22e62c7-44fe-4603-af1a-95ca06a943c4\") " pod="openstack/nova-scheduler-0" Feb 27 10:51:36 crc kubenswrapper[4728]: I0227 10:51:36.996536 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"32bb7294-8dfc-4b00-9227-445c322a47a1","Type":"ContainerStarted","Data":"6ca84247038cad9a0a891c35beff16ded4bd1ddd3f81cd1654114e39702b559e"} Feb 27 10:51:37 crc kubenswrapper[4728]: I0227 10:51:37.011800 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 10:51:37 crc kubenswrapper[4728]: I0227 10:51:37.147777 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 27 10:51:37 crc kubenswrapper[4728]: I0227 10:51:37.536415 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:51:38 crc kubenswrapper[4728]: I0227 10:51:38.024568 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c43d8f20-c2f5-4269-b8fb-aec91f9c9150","Type":"ContainerStarted","Data":"872d71a017d84b6caf3530613786e47bfd46227c97cce702f8be4e463f454c20"} Feb 27 10:51:38 crc kubenswrapper[4728]: I0227 10:51:38.029229 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a22e62c7-44fe-4603-af1a-95ca06a943c4","Type":"ContainerStarted","Data":"b23a3afe2d0e169412dc8126968556686aa43e8a84aac3f3690f7737003595cd"} Feb 27 10:51:38 crc kubenswrapper[4728]: I0227 10:51:38.029280 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a22e62c7-44fe-4603-af1a-95ca06a943c4","Type":"ContainerStarted","Data":"f5dfca0f6317791058a4e104dad9b2fd8bad0c90fa8033ef4de9d52a745defeb"} Feb 27 10:51:38 crc kubenswrapper[4728]: I0227 10:51:38.031106 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"32bb7294-8dfc-4b00-9227-445c322a47a1","Type":"ContainerStarted","Data":"fd6e3db9921506883bd2cc91d1eabe1d2aabc0f09244b0e58f9b2b8015f7e25a"} Feb 27 10:51:38 crc kubenswrapper[4728]: I0227 10:51:38.049282 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.049264267 podStartE2EDuration="3.049264267s" podCreationTimestamp="2026-02-27 10:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 
10:51:38.047290203 +0000 UTC m=+1518.009656309" watchObservedRunningTime="2026-02-27 10:51:38.049264267 +0000 UTC m=+1518.011630373" Feb 27 10:51:39 crc kubenswrapper[4728]: I0227 10:51:39.155255 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 10:51:39 crc kubenswrapper[4728]: I0227 10:51:39.155562 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 10:51:39 crc kubenswrapper[4728]: I0227 10:51:39.155788 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 10:51:39 crc kubenswrapper[4728]: I0227 10:51:39.155841 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 10:51:39 crc kubenswrapper[4728]: I0227 10:51:39.157705 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 10:51:39 crc kubenswrapper[4728]: I0227 10:51:39.160546 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 10:51:39 crc kubenswrapper[4728]: I0227 10:51:39.225382 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-tq6s8" podUID="e81ced4b-a6cf-4dba-964a-52f8bcbd82ae" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 10:51:39 crc kubenswrapper[4728]: I0227 10:51:39.313878 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 10:51:39 crc kubenswrapper[4728]: I0227 10:51:39.320208 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 10:51:39 crc kubenswrapper[4728]: I0227 10:51:39.320419 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-metadata-0" Feb 27 10:51:39 crc kubenswrapper[4728]: I0227 10:51:39.389133 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h4gl8" Feb 27 10:51:39 crc kubenswrapper[4728]: I0227 10:51:39.390766 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h4gl8" Feb 27 10:51:40 crc kubenswrapper[4728]: I0227 10:51:40.057018 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 10:51:40 crc kubenswrapper[4728]: I0227 10:51:40.448348 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h4gl8" podUID="efa9e238-79b0-4757-acab-53537b5ae93a" containerName="registry-server" probeResult="failure" output=< Feb 27 10:51:40 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 10:51:40 crc kubenswrapper[4728]: > Feb 27 10:51:40 crc kubenswrapper[4728]: I0227 10:51:40.913694 4728 patch_prober.go:28] interesting pod/controller-manager-6749b8c876-bbl5m container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 10:51:40 crc kubenswrapper[4728]: I0227 10:51:40.913771 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-6749b8c876-bbl5m" podUID="b8a0e313-f25f-4fbd-a611-73dfaaea0cfb" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 10:51:41 crc kubenswrapper[4728]: I0227 10:51:41.387713 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:43 crc 
kubenswrapper[4728]: I0227 10:51:43.782243 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="803ed01f-b95c-4718-a5e8-3a864b0b7850" containerName="galera" probeResult="failure" output="command timed out" Feb 27 10:51:43 crc kubenswrapper[4728]: I0227 10:51:43.784063 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="803ed01f-b95c-4718-a5e8-3a864b0b7850" containerName="galera" probeResult="failure" output="command timed out" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.125452 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=8.125432234 podStartE2EDuration="8.125432234s" podCreationTimestamp="2026-02-27 10:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:51:44.120327406 +0000 UTC m=+1524.082693512" watchObservedRunningTime="2026-02-27 10:51:44.125432234 +0000 UTC m=+1524.087798340" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.199557 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-gr6np"] Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.201882 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.220466 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-gr6np\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.220557 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-gr6np\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.220592 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-gr6np\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.220803 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-gr6np\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.220975 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp2g7\" (UniqueName: \"kubernetes.io/projected/4447fe65-418c-43b8-aa5a-b78a7d97fe56-kube-api-access-wp2g7\") pod 
\"dnsmasq-dns-6d99f6bc7f-gr6np\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.221029 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-config\") pod \"dnsmasq-dns-6d99f6bc7f-gr6np\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.225375 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-gr6np"] Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.325354 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-gr6np\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.325435 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-gr6np\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.326564 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-gr6np\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.326654 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-gr6np\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.326780 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp2g7\" (UniqueName: \"kubernetes.io/projected/4447fe65-418c-43b8-aa5a-b78a7d97fe56-kube-api-access-wp2g7\") pod \"dnsmasq-dns-6d99f6bc7f-gr6np\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.326828 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-config\") pod \"dnsmasq-dns-6d99f6bc7f-gr6np\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.326943 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-gr6np\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.327666 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-gr6np\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.328288 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-gr6np\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.329680 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-config\") pod \"dnsmasq-dns-6d99f6bc7f-gr6np\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.329999 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-gr6np\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.363606 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp2g7\" (UniqueName: \"kubernetes.io/projected/4447fe65-418c-43b8-aa5a-b78a7d97fe56-kube-api-access-wp2g7\") pod \"dnsmasq-dns-6d99f6bc7f-gr6np\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:44 crc kubenswrapper[4728]: I0227 10:51:44.549169 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:45 crc kubenswrapper[4728]: I0227 10:51:45.161776 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c43d8f20-c2f5-4269-b8fb-aec91f9c9150","Type":"ContainerStarted","Data":"dd99b9342a0c90fd99db61847460349bf4d0b929a28db0ba9c8935f77e2b0803"} Feb 27 10:51:45 crc kubenswrapper[4728]: W0227 10:51:45.355753 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4447fe65_418c_43b8_aa5a_b78a7d97fe56.slice/crio-14de993c05220d694141d7f7be1545bef0f4cd44c062b4ff6edf1371b0bbd17d WatchSource:0}: Error finding container 14de993c05220d694141d7f7be1545bef0f4cd44c062b4ff6edf1371b0bbd17d: Status 404 returned error can't find the container with id 14de993c05220d694141d7f7be1545bef0f4cd44c062b4ff6edf1371b0bbd17d Feb 27 10:51:45 crc kubenswrapper[4728]: I0227 10:51:45.357372 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-gr6np"] Feb 27 10:51:46 crc kubenswrapper[4728]: I0227 10:51:46.184214 4728 generic.go:334] "Generic (PLEG): container finished" podID="4447fe65-418c-43b8-aa5a-b78a7d97fe56" containerID="a602bd244e0453c0b392a6c151070755904dbf737d422bc8d0d1a2b5487ba005" exitCode=0 Feb 27 10:51:46 crc kubenswrapper[4728]: I0227 10:51:46.184330 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" event={"ID":"4447fe65-418c-43b8-aa5a-b78a7d97fe56","Type":"ContainerDied","Data":"a602bd244e0453c0b392a6c151070755904dbf737d422bc8d0d1a2b5487ba005"} Feb 27 10:51:46 crc kubenswrapper[4728]: I0227 10:51:46.185381 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" event={"ID":"4447fe65-418c-43b8-aa5a-b78a7d97fe56","Type":"ContainerStarted","Data":"14de993c05220d694141d7f7be1545bef0f4cd44c062b4ff6edf1371b0bbd17d"} Feb 27 10:51:46 crc kubenswrapper[4728]: I0227 
10:51:46.387719 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:46 crc kubenswrapper[4728]: I0227 10:51:46.390521 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-cell1-novncproxy-0" podUID="32bb7294-8dfc-4b00-9227-445c322a47a1" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.1.8:6080/vnc_lite.html\": dial tcp 10.217.1.8:6080: connect: connection refused" Feb 27 10:51:47 crc kubenswrapper[4728]: I0227 10:51:47.012687 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 27 10:51:47 crc kubenswrapper[4728]: I0227 10:51:47.013023 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 27 10:51:47 crc kubenswrapper[4728]: I0227 10:51:47.052322 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 27 10:51:47 crc kubenswrapper[4728]: I0227 10:51:47.203204 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c43d8f20-c2f5-4269-b8fb-aec91f9c9150","Type":"ContainerStarted","Data":"e4041dbb5c8d290e129167eed9e2894e923153d73e5b5952d283d9b19a991075"} Feb 27 10:51:47 crc kubenswrapper[4728]: I0227 10:51:47.205716 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" event={"ID":"4447fe65-418c-43b8-aa5a-b78a7d97fe56","Type":"ContainerStarted","Data":"9daef0f852c0139f194c3aa591c9673245e5aaef34c3d7f724a39223b54d9f05"} Feb 27 10:51:47 crc kubenswrapper[4728]: I0227 10:51:47.232720 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" podStartSLOduration=3.23267242 podStartE2EDuration="3.23267242s" podCreationTimestamp="2026-02-27 10:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:51:47.222766771 +0000 UTC m=+1527.185132867" watchObservedRunningTime="2026-02-27 10:51:47.23267242 +0000 UTC m=+1527.195038536" Feb 27 10:51:47 crc kubenswrapper[4728]: I0227 10:51:47.250293 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 27 10:51:47 crc kubenswrapper[4728]: I0227 10:51:47.478550 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:51:47 crc kubenswrapper[4728]: I0227 10:51:47.478887 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75262df4-2453-419b-b516-f7a0d58deb82" containerName="ceilometer-central-agent" containerID="cri-o://2f7db1d2699a79a45efe79543ae1474ab17436fbd5be080f1985b713f4ce56bf" gracePeriod=30 Feb 27 10:51:47 crc kubenswrapper[4728]: I0227 10:51:47.478915 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75262df4-2453-419b-b516-f7a0d58deb82" containerName="proxy-httpd" containerID="cri-o://fea4c381c2e7aa12818964364b2085d666b42a1f0e711d978e1344b3a57850f0" gracePeriod=30 Feb 27 10:51:47 crc kubenswrapper[4728]: I0227 10:51:47.479017 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75262df4-2453-419b-b516-f7a0d58deb82" containerName="sg-core" containerID="cri-o://a93105e8da41986fcc76fdb27291716d041b79c8fa83154a8faf05fbc4b5b939" gracePeriod=30 Feb 27 10:51:47 crc kubenswrapper[4728]: I0227 10:51:47.479034 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75262df4-2453-419b-b516-f7a0d58deb82" containerName="ceilometer-notification-agent" containerID="cri-o://5da9b531de57551bb290118fc0f5853758a3bae376fa5fe9eeed8ea959871936" gracePeriod=30 Feb 27 10:51:47 crc kubenswrapper[4728]: I0227 10:51:47.504589 
4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="75262df4-2453-419b-b516-f7a0d58deb82" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.1.7:3000/\": EOF" Feb 27 10:51:48 crc kubenswrapper[4728]: I0227 10:51:48.244611 4728 generic.go:334] "Generic (PLEG): container finished" podID="75262df4-2453-419b-b516-f7a0d58deb82" containerID="fea4c381c2e7aa12818964364b2085d666b42a1f0e711d978e1344b3a57850f0" exitCode=0 Feb 27 10:51:48 crc kubenswrapper[4728]: I0227 10:51:48.244847 4728 generic.go:334] "Generic (PLEG): container finished" podID="75262df4-2453-419b-b516-f7a0d58deb82" containerID="a93105e8da41986fcc76fdb27291716d041b79c8fa83154a8faf05fbc4b5b939" exitCode=2 Feb 27 10:51:48 crc kubenswrapper[4728]: I0227 10:51:48.244855 4728 generic.go:334] "Generic (PLEG): container finished" podID="75262df4-2453-419b-b516-f7a0d58deb82" containerID="2f7db1d2699a79a45efe79543ae1474ab17436fbd5be080f1985b713f4ce56bf" exitCode=0 Feb 27 10:51:48 crc kubenswrapper[4728]: I0227 10:51:48.245590 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75262df4-2453-419b-b516-f7a0d58deb82","Type":"ContainerDied","Data":"fea4c381c2e7aa12818964364b2085d666b42a1f0e711d978e1344b3a57850f0"} Feb 27 10:51:48 crc kubenswrapper[4728]: I0227 10:51:48.245642 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75262df4-2453-419b-b516-f7a0d58deb82","Type":"ContainerDied","Data":"a93105e8da41986fcc76fdb27291716d041b79c8fa83154a8faf05fbc4b5b939"} Feb 27 10:51:48 crc kubenswrapper[4728]: I0227 10:51:48.245655 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75262df4-2453-419b-b516-f7a0d58deb82","Type":"ContainerDied","Data":"2f7db1d2699a79a45efe79543ae1474ab17436fbd5be080f1985b713f4ce56bf"} Feb 27 10:51:48 crc kubenswrapper[4728]: I0227 10:51:48.245961 4728 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:51:48 crc kubenswrapper[4728]: I0227 10:51:48.295457 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:51:48 crc kubenswrapper[4728]: I0227 10:51:48.295706 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" containerName="nova-api-log" containerID="cri-o://7733460b6570f072ed8fb2df9150baf3c06b13a0819a34de968d1856336c6fdb" gracePeriod=30 Feb 27 10:51:48 crc kubenswrapper[4728]: I0227 10:51:48.295834 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" containerName="nova-api-api" containerID="cri-o://752366dc33bd3f59cec62f09b50960b6a4d19c83ad2c9d13f0fb06eba8337006" gracePeriod=30 Feb 27 10:51:48 crc kubenswrapper[4728]: I0227 10:51:48.656236 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 27 10:51:49 crc kubenswrapper[4728]: I0227 10:51:49.271147 4728 generic.go:334] "Generic (PLEG): container finished" podID="e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" containerID="7733460b6570f072ed8fb2df9150baf3c06b13a0819a34de968d1856336c6fdb" exitCode=143 Feb 27 10:51:49 crc kubenswrapper[4728]: I0227 10:51:49.271523 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8","Type":"ContainerDied","Data":"7733460b6570f072ed8fb2df9150baf3c06b13a0819a34de968d1856336c6fdb"} Feb 27 10:51:49 crc kubenswrapper[4728]: I0227 10:51:49.333991 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c43d8f20-c2f5-4269-b8fb-aec91f9c9150","Type":"ContainerStarted","Data":"8766429dec4d5a212661654f4bc0512821fe17b80d97bfe039c3f4c0d0273195"} Feb 27 10:51:50 crc kubenswrapper[4728]: I0227 10:51:50.460223 4728 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-h4gl8" podUID="efa9e238-79b0-4757-acab-53537b5ae93a" containerName="registry-server" probeResult="failure" output=< Feb 27 10:51:50 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 10:51:50 crc kubenswrapper[4728]: > Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.365269 4728 generic.go:334] "Generic (PLEG): container finished" podID="75262df4-2453-419b-b516-f7a0d58deb82" containerID="5da9b531de57551bb290118fc0f5853758a3bae376fa5fe9eeed8ea959871936" exitCode=0 Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.365445 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75262df4-2453-419b-b516-f7a0d58deb82","Type":"ContainerDied","Data":"5da9b531de57551bb290118fc0f5853758a3bae376fa5fe9eeed8ea959871936"} Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.377040 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c43d8f20-c2f5-4269-b8fb-aec91f9c9150","Type":"ContainerStarted","Data":"f9f2d8d912390de684dbeca893fc6e00f5cff27566b226162549305a7fe73564"} Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.377219 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerName="aodh-api" containerID="cri-o://dd99b9342a0c90fd99db61847460349bf4d0b929a28db0ba9c8935f77e2b0803" gracePeriod=30 Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.377767 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerName="aodh-notifier" containerID="cri-o://8766429dec4d5a212661654f4bc0512821fe17b80d97bfe039c3f4c0d0273195" gracePeriod=30 Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.377934 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" 
podUID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerName="aodh-listener" containerID="cri-o://f9f2d8d912390de684dbeca893fc6e00f5cff27566b226162549305a7fe73564" gracePeriod=30 Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.377942 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerName="aodh-evaluator" containerID="cri-o://e4041dbb5c8d290e129167eed9e2894e923153d73e5b5952d283d9b19a991075" gracePeriod=30 Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.421050 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.075195331 podStartE2EDuration="15.421022721s" podCreationTimestamp="2026-02-27 10:51:36 +0000 UTC" firstStartedPulling="2026-02-27 10:51:37.16566116 +0000 UTC m=+1517.128027266" lastFinishedPulling="2026-02-27 10:51:50.51148855 +0000 UTC m=+1530.473854656" observedRunningTime="2026-02-27 10:51:51.401867391 +0000 UTC m=+1531.364233527" watchObservedRunningTime="2026-02-27 10:51:51.421022721 +0000 UTC m=+1531.383388827" Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.682691 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.783142 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75262df4-2453-419b-b516-f7a0d58deb82-log-httpd\") pod \"75262df4-2453-419b-b516-f7a0d58deb82\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.783223 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-sg-core-conf-yaml\") pod \"75262df4-2453-419b-b516-f7a0d58deb82\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.783336 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-combined-ca-bundle\") pod \"75262df4-2453-419b-b516-f7a0d58deb82\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.783361 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75262df4-2453-419b-b516-f7a0d58deb82-run-httpd\") pod \"75262df4-2453-419b-b516-f7a0d58deb82\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.783404 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-scripts\") pod \"75262df4-2453-419b-b516-f7a0d58deb82\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") " Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.783445 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl6ms\" (UniqueName: 
\"kubernetes.io/projected/75262df4-2453-419b-b516-f7a0d58deb82-kube-api-access-hl6ms\") pod \"75262df4-2453-419b-b516-f7a0d58deb82\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") "
Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.784662 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75262df4-2453-419b-b516-f7a0d58deb82-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "75262df4-2453-419b-b516-f7a0d58deb82" (UID: "75262df4-2453-419b-b516-f7a0d58deb82"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.785039 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75262df4-2453-419b-b516-f7a0d58deb82-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "75262df4-2453-419b-b516-f7a0d58deb82" (UID: "75262df4-2453-419b-b516-f7a0d58deb82"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.798975 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-scripts" (OuterVolumeSpecName: "scripts") pod "75262df4-2453-419b-b516-f7a0d58deb82" (UID: "75262df4-2453-419b-b516-f7a0d58deb82"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.801404 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75262df4-2453-419b-b516-f7a0d58deb82-kube-api-access-hl6ms" (OuterVolumeSpecName: "kube-api-access-hl6ms") pod "75262df4-2453-419b-b516-f7a0d58deb82" (UID: "75262df4-2453-419b-b516-f7a0d58deb82"). InnerVolumeSpecName "kube-api-access-hl6ms". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.833150 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "75262df4-2453-419b-b516-f7a0d58deb82" (UID: "75262df4-2453-419b-b516-f7a0d58deb82"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.885490 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-config-data\") pod \"75262df4-2453-419b-b516-f7a0d58deb82\" (UID: \"75262df4-2453-419b-b516-f7a0d58deb82\") "
Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.889031 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl6ms\" (UniqueName: \"kubernetes.io/projected/75262df4-2453-419b-b516-f7a0d58deb82-kube-api-access-hl6ms\") on node \"crc\" DevicePath \"\""
Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.889070 4728 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75262df4-2453-419b-b516-f7a0d58deb82-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.889086 4728 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.889098 4728 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75262df4-2453-419b-b516-f7a0d58deb82-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.889110 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.926615 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75262df4-2453-419b-b516-f7a0d58deb82" (UID: "75262df4-2453-419b-b516-f7a0d58deb82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:51:51 crc kubenswrapper[4728]: I0227 10:51:51.995110 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.064721 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-config-data" (OuterVolumeSpecName: "config-data") pod "75262df4-2453-419b-b516-f7a0d58deb82" (UID: "75262df4-2453-419b-b516-f7a0d58deb82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.097921 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75262df4-2453-419b-b516-f7a0d58deb82-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.390315 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75262df4-2453-419b-b516-f7a0d58deb82","Type":"ContainerDied","Data":"979c2b2ae1a777a6b62b017c865d49601d1c37487bc8d22d2dd2713e4b82d3a6"}
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.390385 4728 scope.go:117] "RemoveContainer" containerID="fea4c381c2e7aa12818964364b2085d666b42a1f0e711d978e1344b3a57850f0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.390330 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.400066 4728 generic.go:334] "Generic (PLEG): container finished" podID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerID="e4041dbb5c8d290e129167eed9e2894e923153d73e5b5952d283d9b19a991075" exitCode=0
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.400102 4728 generic.go:334] "Generic (PLEG): container finished" podID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerID="dd99b9342a0c90fd99db61847460349bf4d0b929a28db0ba9c8935f77e2b0803" exitCode=0
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.400124 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c43d8f20-c2f5-4269-b8fb-aec91f9c9150","Type":"ContainerDied","Data":"e4041dbb5c8d290e129167eed9e2894e923153d73e5b5952d283d9b19a991075"}
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.400158 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c43d8f20-c2f5-4269-b8fb-aec91f9c9150","Type":"ContainerDied","Data":"dd99b9342a0c90fd99db61847460349bf4d0b929a28db0ba9c8935f77e2b0803"}
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.418025 4728 scope.go:117] "RemoveContainer" containerID="a93105e8da41986fcc76fdb27291716d041b79c8fa83154a8faf05fbc4b5b939"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.444121 4728 scope.go:117] "RemoveContainer" containerID="5da9b531de57551bb290118fc0f5853758a3bae376fa5fe9eeed8ea959871936"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.448736 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.466984 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.492872 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:51:52 crc kubenswrapper[4728]: E0227 10:51:52.493449 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75262df4-2453-419b-b516-f7a0d58deb82" containerName="proxy-httpd"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.493469 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="75262df4-2453-419b-b516-f7a0d58deb82" containerName="proxy-httpd"
Feb 27 10:51:52 crc kubenswrapper[4728]: E0227 10:51:52.493481 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75262df4-2453-419b-b516-f7a0d58deb82" containerName="ceilometer-notification-agent"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.493488 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="75262df4-2453-419b-b516-f7a0d58deb82" containerName="ceilometer-notification-agent"
Feb 27 10:51:52 crc kubenswrapper[4728]: E0227 10:51:52.493516 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75262df4-2453-419b-b516-f7a0d58deb82" containerName="sg-core"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.493522 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="75262df4-2453-419b-b516-f7a0d58deb82" containerName="sg-core"
Feb 27 10:51:52 crc kubenswrapper[4728]: E0227 10:51:52.493540 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75262df4-2453-419b-b516-f7a0d58deb82" containerName="ceilometer-central-agent"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.493553 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="75262df4-2453-419b-b516-f7a0d58deb82" containerName="ceilometer-central-agent"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.493846 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="75262df4-2453-419b-b516-f7a0d58deb82" containerName="sg-core"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.493874 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="75262df4-2453-419b-b516-f7a0d58deb82" containerName="ceilometer-central-agent"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.493895 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="75262df4-2453-419b-b516-f7a0d58deb82" containerName="proxy-httpd"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.493913 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="75262df4-2453-419b-b516-f7a0d58deb82" containerName="ceilometer-notification-agent"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.496406 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.498557 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.498954 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.528689 4728 scope.go:117] "RemoveContainer" containerID="2f7db1d2699a79a45efe79543ae1474ab17436fbd5be080f1985b713f4ce56bf"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.536001 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.609544 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-config-data\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.609587 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.609622 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.610578 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg6hh\" (UniqueName: \"kubernetes.io/projected/7779b2a2-e696-4b07-9b34-9c43065ada96-kube-api-access-hg6hh\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.610823 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7779b2a2-e696-4b07-9b34-9c43065ada96-run-httpd\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.610881 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7779b2a2-e696-4b07-9b34-9c43065ada96-log-httpd\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.610952 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-scripts\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.712923 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7779b2a2-e696-4b07-9b34-9c43065ada96-run-httpd\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.713194 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7779b2a2-e696-4b07-9b34-9c43065ada96-log-httpd\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.713269 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-scripts\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.713311 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-config-data\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.713333 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.713367 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.713518 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg6hh\" (UniqueName: \"kubernetes.io/projected/7779b2a2-e696-4b07-9b34-9c43065ada96-kube-api-access-hg6hh\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.713591 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7779b2a2-e696-4b07-9b34-9c43065ada96-run-httpd\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.714671 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7779b2a2-e696-4b07-9b34-9c43065ada96-log-httpd\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.719014 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.719977 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.3:8774/\": read tcp 10.217.0.2:41212->10.217.1.3:8774: read: connection reset by peer"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.720017 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.3:8774/\": read tcp 10.217.0.2:41226->10.217.1.3:8774: read: connection reset by peer"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.722192 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.722255 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-scripts\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.722737 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-config-data\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.731902 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg6hh\" (UniqueName: \"kubernetes.io/projected/7779b2a2-e696-4b07-9b34-9c43065ada96-kube-api-access-hg6hh\") pod \"ceilometer-0\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " pod="openstack/ceilometer-0"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.739056 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75262df4-2453-419b-b516-f7a0d58deb82" path="/var/lib/kubelet/pods/75262df4-2453-419b-b516-f7a0d58deb82/volumes"
Feb 27 10:51:52 crc kubenswrapper[4728]: I0227 10:51:52.835456 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.231821 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.326918 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-logs\") pod \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\" (UID: \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\") "
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.327010 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-combined-ca-bundle\") pod \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\" (UID: \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\") "
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.327044 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4x2z\" (UniqueName: \"kubernetes.io/projected/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-kube-api-access-v4x2z\") pod \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\" (UID: \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\") "
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.327113 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-config-data\") pod \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\" (UID: \"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8\") "
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.327420 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-logs" (OuterVolumeSpecName: "logs") pod "e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" (UID: "e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.327714 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-logs\") on node \"crc\" DevicePath \"\""
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.333840 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-kube-api-access-v4x2z" (OuterVolumeSpecName: "kube-api-access-v4x2z") pod "e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" (UID: "e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8"). InnerVolumeSpecName "kube-api-access-v4x2z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.358040 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-config-data" (OuterVolumeSpecName: "config-data") pod "e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" (UID: "e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.367349 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" (UID: "e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.423128 4728 generic.go:334] "Generic (PLEG): container finished" podID="e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" containerID="752366dc33bd3f59cec62f09b50960b6a4d19c83ad2c9d13f0fb06eba8337006" exitCode=0
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.423224 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8","Type":"ContainerDied","Data":"752366dc33bd3f59cec62f09b50960b6a4d19c83ad2c9d13f0fb06eba8337006"}
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.423256 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8","Type":"ContainerDied","Data":"e8d4aad0803ae1b1d37fdc2fe38d233412c5f13f1ca95f189c78312aa463b90d"}
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.423277 4728 scope.go:117] "RemoveContainer" containerID="752366dc33bd3f59cec62f09b50960b6a4d19c83ad2c9d13f0fb06eba8337006"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.423404 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.429282 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.429319 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4x2z\" (UniqueName: \"kubernetes.io/projected/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-kube-api-access-v4x2z\") on node \"crc\" DevicePath \"\""
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.429336 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.441088 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.459594 4728 scope.go:117] "RemoveContainer" containerID="7733460b6570f072ed8fb2df9150baf3c06b13a0819a34de968d1856336c6fdb"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.472481 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.484229 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.508144 4728 scope.go:117] "RemoveContainer" containerID="752366dc33bd3f59cec62f09b50960b6a4d19c83ad2c9d13f0fb06eba8337006"
Feb 27 10:51:53 crc kubenswrapper[4728]: E0227 10:51:53.510169 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"752366dc33bd3f59cec62f09b50960b6a4d19c83ad2c9d13f0fb06eba8337006\": container with ID starting with 752366dc33bd3f59cec62f09b50960b6a4d19c83ad2c9d13f0fb06eba8337006 not found: ID does not exist" containerID="752366dc33bd3f59cec62f09b50960b6a4d19c83ad2c9d13f0fb06eba8337006"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.510239 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"752366dc33bd3f59cec62f09b50960b6a4d19c83ad2c9d13f0fb06eba8337006"} err="failed to get container status \"752366dc33bd3f59cec62f09b50960b6a4d19c83ad2c9d13f0fb06eba8337006\": rpc error: code = NotFound desc = could not find container \"752366dc33bd3f59cec62f09b50960b6a4d19c83ad2c9d13f0fb06eba8337006\": container with ID starting with 752366dc33bd3f59cec62f09b50960b6a4d19c83ad2c9d13f0fb06eba8337006 not found: ID does not exist"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.510267 4728 scope.go:117] "RemoveContainer" containerID="7733460b6570f072ed8fb2df9150baf3c06b13a0819a34de968d1856336c6fdb"
Feb 27 10:51:53 crc kubenswrapper[4728]: E0227 10:51:53.511252 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7733460b6570f072ed8fb2df9150baf3c06b13a0819a34de968d1856336c6fdb\": container with ID starting with 7733460b6570f072ed8fb2df9150baf3c06b13a0819a34de968d1856336c6fdb not found: ID does not exist" containerID="7733460b6570f072ed8fb2df9150baf3c06b13a0819a34de968d1856336c6fdb"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.511286 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7733460b6570f072ed8fb2df9150baf3c06b13a0819a34de968d1856336c6fdb"} err="failed to get container status \"7733460b6570f072ed8fb2df9150baf3c06b13a0819a34de968d1856336c6fdb\": rpc error: code = NotFound desc = could not find container \"7733460b6570f072ed8fb2df9150baf3c06b13a0819a34de968d1856336c6fdb\": container with ID starting with 7733460b6570f072ed8fb2df9150baf3c06b13a0819a34de968d1856336c6fdb not found: ID does not exist"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.520409 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 27 10:51:53 crc kubenswrapper[4728]: E0227 10:51:53.520952 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" containerName="nova-api-log"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.520969 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" containerName="nova-api-log"
Feb 27 10:51:53 crc kubenswrapper[4728]: E0227 10:51:53.520982 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" containerName="nova-api-api"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.520988 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" containerName="nova-api-api"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.521205 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" containerName="nova-api-log"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.521227 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" containerName="nova-api-api"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.522480 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.525202 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.525398 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.525819 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.543518 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.559895 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.560113 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24f4l\" (UniqueName: \"kubernetes.io/projected/195e680c-d89c-4b39-ae68-145934b70fa2-kube-api-access-24f4l\") pod \"nova-api-0\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.560306 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195e680c-d89c-4b39-ae68-145934b70fa2-logs\") pod \"nova-api-0\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.560387 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-public-tls-certs\") pod \"nova-api-0\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.560483 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-config-data\") pod \"nova-api-0\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.560599 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.668242 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195e680c-d89c-4b39-ae68-145934b70fa2-logs\") pod \"nova-api-0\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.668314 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-public-tls-certs\") pod \"nova-api-0\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.668361 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-config-data\") pod \"nova-api-0\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.668402 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.668446 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.668522 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24f4l\" (UniqueName: \"kubernetes.io/projected/195e680c-d89c-4b39-ae68-145934b70fa2-kube-api-access-24f4l\") pod \"nova-api-0\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.668693 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195e680c-d89c-4b39-ae68-145934b70fa2-logs\") pod \"nova-api-0\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.670684 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.674933 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.675433 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.676635 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-config-data\") pod \"nova-api-0\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.677105 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-public-tls-certs\") pod \"nova-api-0\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.684958 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24f4l\" (UniqueName: \"kubernetes.io/projected/195e680c-d89c-4b39-ae68-145934b70fa2-kube-api-access-24f4l\") pod \"nova-api-0\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " pod="openstack/nova-api-0"
Feb 27 10:51:53 crc kubenswrapper[4728]: I0227 10:51:53.949202 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 27 10:51:54 crc kubenswrapper[4728]: I0227 10:51:54.482294 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7779b2a2-e696-4b07-9b34-9c43065ada96","Type":"ContainerStarted","Data":"39c3495ef2240bcb4ccd6ecfbbf1d3a45e7ead86602f45e065cea3b63dc56269"}
Feb 27 10:51:54 crc kubenswrapper[4728]: I0227 10:51:54.482984 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7779b2a2-e696-4b07-9b34-9c43065ada96","Type":"ContainerStarted","Data":"32167401d2ea6cdd70f83f4b38da417f08dd40b9bb67b4e542ba6ec9dc4ad69f"}
Feb 27 10:51:54 crc kubenswrapper[4728]: I0227 10:51:54.486012 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 27 10:51:54 crc kubenswrapper[4728]: W0227 10:51:54.497910 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod195e680c_d89c_4b39_ae68_145934b70fa2.slice/crio-90ee8606f1cbb298b6a49cadeaf5ecf39a12959fab55e7b9c5ae0a323a51e8d7 WatchSource:0}: Error finding container 90ee8606f1cbb298b6a49cadeaf5ecf39a12959fab55e7b9c5ae0a323a51e8d7: Status 404 returned error can't find the container with id 90ee8606f1cbb298b6a49cadeaf5ecf39a12959fab55e7b9c5ae0a323a51e8d7
Feb 27 10:51:54 crc kubenswrapper[4728]: I0227 10:51:54.551772 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np"
Feb 27 10:51:54 crc kubenswrapper[4728]: I0227 10:51:54.623315 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-bb282"]
Feb 27 10:51:54 crc kubenswrapper[4728]: I0227 10:51:54.623579 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7877d89589-bb282" podUID="03fe94ce-b874-4412-b3a8-adb6d8172507" containerName="dnsmasq-dns"
containerID="cri-o://43f906b3b632e54291e58d236fff5ceca13200ace38dc4fbb48d1a0447e98594" gracePeriod=10 Feb 27 10:51:54 crc kubenswrapper[4728]: I0227 10:51:54.742211 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8" path="/var/lib/kubelet/pods/e99f57b6-6a8c-4e28-8a44-b1383a7ec0c8/volumes" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.095184 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.204862 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-ovsdbserver-nb\") pod \"03fe94ce-b874-4412-b3a8-adb6d8172507\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.205150 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-dns-svc\") pod \"03fe94ce-b874-4412-b3a8-adb6d8172507\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.205244 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-config\") pod \"03fe94ce-b874-4412-b3a8-adb6d8172507\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.205271 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-ovsdbserver-sb\") pod \"03fe94ce-b874-4412-b3a8-adb6d8172507\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.205340 4728 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-dns-swift-storage-0\") pod \"03fe94ce-b874-4412-b3a8-adb6d8172507\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.205373 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctcxz\" (UniqueName: \"kubernetes.io/projected/03fe94ce-b874-4412-b3a8-adb6d8172507-kube-api-access-ctcxz\") pod \"03fe94ce-b874-4412-b3a8-adb6d8172507\" (UID: \"03fe94ce-b874-4412-b3a8-adb6d8172507\") " Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.222970 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03fe94ce-b874-4412-b3a8-adb6d8172507-kube-api-access-ctcxz" (OuterVolumeSpecName: "kube-api-access-ctcxz") pod "03fe94ce-b874-4412-b3a8-adb6d8172507" (UID: "03fe94ce-b874-4412-b3a8-adb6d8172507"). InnerVolumeSpecName "kube-api-access-ctcxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.308583 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctcxz\" (UniqueName: \"kubernetes.io/projected/03fe94ce-b874-4412-b3a8-adb6d8172507-kube-api-access-ctcxz\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.455420 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-config" (OuterVolumeSpecName: "config") pod "03fe94ce-b874-4412-b3a8-adb6d8172507" (UID: "03fe94ce-b874-4412-b3a8-adb6d8172507"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.464686 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "03fe94ce-b874-4412-b3a8-adb6d8172507" (UID: "03fe94ce-b874-4412-b3a8-adb6d8172507"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.470288 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "03fe94ce-b874-4412-b3a8-adb6d8172507" (UID: "03fe94ce-b874-4412-b3a8-adb6d8172507"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.479895 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "03fe94ce-b874-4412-b3a8-adb6d8172507" (UID: "03fe94ce-b874-4412-b3a8-adb6d8172507"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.487326 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "03fe94ce-b874-4412-b3a8-adb6d8172507" (UID: "03fe94ce-b874-4412-b3a8-adb6d8172507"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.497699 4728 generic.go:334] "Generic (PLEG): container finished" podID="03fe94ce-b874-4412-b3a8-adb6d8172507" containerID="43f906b3b632e54291e58d236fff5ceca13200ace38dc4fbb48d1a0447e98594" exitCode=0 Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.497757 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-bb282" event={"ID":"03fe94ce-b874-4412-b3a8-adb6d8172507","Type":"ContainerDied","Data":"43f906b3b632e54291e58d236fff5ceca13200ace38dc4fbb48d1a0447e98594"} Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.497788 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-bb282" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.497818 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-bb282" event={"ID":"03fe94ce-b874-4412-b3a8-adb6d8172507","Type":"ContainerDied","Data":"6b925af78c426e1bfb61fc68745377b0a0b85b13bf006b4106226d100e576bad"} Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.497839 4728 scope.go:117] "RemoveContainer" containerID="43f906b3b632e54291e58d236fff5ceca13200ace38dc4fbb48d1a0447e98594" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.499975 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7779b2a2-e696-4b07-9b34-9c43065ada96","Type":"ContainerStarted","Data":"b3e48fbd853810a5fa71c2ec9a6b3a770c83d310830c8678e070b6ff9a459c5b"} Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.503308 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"195e680c-d89c-4b39-ae68-145934b70fa2","Type":"ContainerStarted","Data":"1166b274401bd1f7b903a229bfec9c38ef0399907dc26a92c7654a11f4cb85ad"} Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.503400 4728 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"195e680c-d89c-4b39-ae68-145934b70fa2","Type":"ContainerStarted","Data":"8e6000b65abfaea8381dd3976c155f1a601e5c00805ad4d2bd622bfacd6d0749"} Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.503417 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"195e680c-d89c-4b39-ae68-145934b70fa2","Type":"ContainerStarted","Data":"90ee8606f1cbb298b6a49cadeaf5ecf39a12959fab55e7b9c5ae0a323a51e8d7"} Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.512601 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.512631 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.512641 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.512653 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.512663 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03fe94ce-b874-4412-b3a8-adb6d8172507-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.550210 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" 
podStartSLOduration=2.5501871659999997 podStartE2EDuration="2.550187166s" podCreationTimestamp="2026-02-27 10:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:51:55.524370736 +0000 UTC m=+1535.486736842" watchObservedRunningTime="2026-02-27 10:51:55.550187166 +0000 UTC m=+1535.512553272" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.558265 4728 scope.go:117] "RemoveContainer" containerID="d5b702f4c67f3c1b5305b10b926e10817a9c76ae5867e9c02cf500c9f85c8fd4" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.565205 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-bb282"] Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.580714 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-bb282"] Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.722821 4728 scope.go:117] "RemoveContainer" containerID="43f906b3b632e54291e58d236fff5ceca13200ace38dc4fbb48d1a0447e98594" Feb 27 10:51:55 crc kubenswrapper[4728]: E0227 10:51:55.723419 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43f906b3b632e54291e58d236fff5ceca13200ace38dc4fbb48d1a0447e98594\": container with ID starting with 43f906b3b632e54291e58d236fff5ceca13200ace38dc4fbb48d1a0447e98594 not found: ID does not exist" containerID="43f906b3b632e54291e58d236fff5ceca13200ace38dc4fbb48d1a0447e98594" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.723454 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43f906b3b632e54291e58d236fff5ceca13200ace38dc4fbb48d1a0447e98594"} err="failed to get container status \"43f906b3b632e54291e58d236fff5ceca13200ace38dc4fbb48d1a0447e98594\": rpc error: code = NotFound desc = could not find container 
\"43f906b3b632e54291e58d236fff5ceca13200ace38dc4fbb48d1a0447e98594\": container with ID starting with 43f906b3b632e54291e58d236fff5ceca13200ace38dc4fbb48d1a0447e98594 not found: ID does not exist" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.723478 4728 scope.go:117] "RemoveContainer" containerID="d5b702f4c67f3c1b5305b10b926e10817a9c76ae5867e9c02cf500c9f85c8fd4" Feb 27 10:51:55 crc kubenswrapper[4728]: E0227 10:51:55.723800 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5b702f4c67f3c1b5305b10b926e10817a9c76ae5867e9c02cf500c9f85c8fd4\": container with ID starting with d5b702f4c67f3c1b5305b10b926e10817a9c76ae5867e9c02cf500c9f85c8fd4 not found: ID does not exist" containerID="d5b702f4c67f3c1b5305b10b926e10817a9c76ae5867e9c02cf500c9f85c8fd4" Feb 27 10:51:55 crc kubenswrapper[4728]: I0227 10:51:55.723903 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5b702f4c67f3c1b5305b10b926e10817a9c76ae5867e9c02cf500c9f85c8fd4"} err="failed to get container status \"d5b702f4c67f3c1b5305b10b926e10817a9c76ae5867e9c02cf500c9f85c8fd4\": rpc error: code = NotFound desc = could not find container \"d5b702f4c67f3c1b5305b10b926e10817a9c76ae5867e9c02cf500c9f85c8fd4\": container with ID starting with d5b702f4c67f3c1b5305b10b926e10817a9c76ae5867e9c02cf500c9f85c8fd4 not found: ID does not exist" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.432073 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.455477 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.535855 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7779b2a2-e696-4b07-9b34-9c43065ada96","Type":"ContainerStarted","Data":"94083ca2be748847e2f46f86ecabedd161cb7582bee47fabfeca03ac316c26ed"} Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.641542 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-zdmnv"] Feb 27 10:51:56 crc kubenswrapper[4728]: E0227 10:51:56.642037 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03fe94ce-b874-4412-b3a8-adb6d8172507" containerName="init" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.642052 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="03fe94ce-b874-4412-b3a8-adb6d8172507" containerName="init" Feb 27 10:51:56 crc kubenswrapper[4728]: E0227 10:51:56.642081 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03fe94ce-b874-4412-b3a8-adb6d8172507" containerName="dnsmasq-dns" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.642088 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="03fe94ce-b874-4412-b3a8-adb6d8172507" containerName="dnsmasq-dns" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.642302 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="03fe94ce-b874-4412-b3a8-adb6d8172507" containerName="dnsmasq-dns" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.643181 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zdmnv" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.645397 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.645695 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.654924 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-zdmnv"] Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.738731 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03fe94ce-b874-4412-b3a8-adb6d8172507" path="/var/lib/kubelet/pods/03fe94ce-b874-4412-b3a8-adb6d8172507/volumes" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.742706 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxf2t\" (UniqueName: \"kubernetes.io/projected/08778ec2-d0d0-42a2-8497-290bfe1b10c1-kube-api-access-lxf2t\") pod \"nova-cell1-cell-mapping-zdmnv\" (UID: \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\") " pod="openstack/nova-cell1-cell-mapping-zdmnv" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.742742 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08778ec2-d0d0-42a2-8497-290bfe1b10c1-config-data\") pod \"nova-cell1-cell-mapping-zdmnv\" (UID: \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\") " pod="openstack/nova-cell1-cell-mapping-zdmnv" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.742764 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08778ec2-d0d0-42a2-8497-290bfe1b10c1-scripts\") pod \"nova-cell1-cell-mapping-zdmnv\" (UID: \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\") 
" pod="openstack/nova-cell1-cell-mapping-zdmnv" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.742780 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08778ec2-d0d0-42a2-8497-290bfe1b10c1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zdmnv\" (UID: \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\") " pod="openstack/nova-cell1-cell-mapping-zdmnv" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.844656 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxf2t\" (UniqueName: \"kubernetes.io/projected/08778ec2-d0d0-42a2-8497-290bfe1b10c1-kube-api-access-lxf2t\") pod \"nova-cell1-cell-mapping-zdmnv\" (UID: \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\") " pod="openstack/nova-cell1-cell-mapping-zdmnv" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.844699 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08778ec2-d0d0-42a2-8497-290bfe1b10c1-config-data\") pod \"nova-cell1-cell-mapping-zdmnv\" (UID: \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\") " pod="openstack/nova-cell1-cell-mapping-zdmnv" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.844724 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08778ec2-d0d0-42a2-8497-290bfe1b10c1-scripts\") pod \"nova-cell1-cell-mapping-zdmnv\" (UID: \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\") " pod="openstack/nova-cell1-cell-mapping-zdmnv" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.844741 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08778ec2-d0d0-42a2-8497-290bfe1b10c1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zdmnv\" (UID: \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\") " 
pod="openstack/nova-cell1-cell-mapping-zdmnv" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.851038 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08778ec2-d0d0-42a2-8497-290bfe1b10c1-scripts\") pod \"nova-cell1-cell-mapping-zdmnv\" (UID: \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\") " pod="openstack/nova-cell1-cell-mapping-zdmnv" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.851486 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08778ec2-d0d0-42a2-8497-290bfe1b10c1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zdmnv\" (UID: \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\") " pod="openstack/nova-cell1-cell-mapping-zdmnv" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.852004 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08778ec2-d0d0-42a2-8497-290bfe1b10c1-config-data\") pod \"nova-cell1-cell-mapping-zdmnv\" (UID: \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\") " pod="openstack/nova-cell1-cell-mapping-zdmnv" Feb 27 10:51:56 crc kubenswrapper[4728]: I0227 10:51:56.867918 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxf2t\" (UniqueName: \"kubernetes.io/projected/08778ec2-d0d0-42a2-8497-290bfe1b10c1-kube-api-access-lxf2t\") pod \"nova-cell1-cell-mapping-zdmnv\" (UID: \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\") " pod="openstack/nova-cell1-cell-mapping-zdmnv" Feb 27 10:51:57 crc kubenswrapper[4728]: I0227 10:51:57.002421 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zdmnv" Feb 27 10:51:57 crc kubenswrapper[4728]: I0227 10:51:57.545303 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-zdmnv"] Feb 27 10:51:58 crc kubenswrapper[4728]: I0227 10:51:58.563161 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zdmnv" event={"ID":"08778ec2-d0d0-42a2-8497-290bfe1b10c1","Type":"ContainerStarted","Data":"7daba541974762bd1dc66fca9853b1be4a8ce00014832111e32d50a498e242d1"} Feb 27 10:51:58 crc kubenswrapper[4728]: I0227 10:51:58.563542 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zdmnv" event={"ID":"08778ec2-d0d0-42a2-8497-290bfe1b10c1","Type":"ContainerStarted","Data":"353894cda2446bd1ec1a5cd4de68d341cf7858138a53e7a92e59e041b4facb37"} Feb 27 10:51:58 crc kubenswrapper[4728]: I0227 10:51:58.565944 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7779b2a2-e696-4b07-9b34-9c43065ada96","Type":"ContainerStarted","Data":"b1e98f0a01f972e8e0a4e700d3c5a92aaee69d6d29c18e60e1f23b8b081e4173"} Feb 27 10:51:58 crc kubenswrapper[4728]: I0227 10:51:58.566099 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerName="ceilometer-central-agent" containerID="cri-o://39c3495ef2240bcb4ccd6ecfbbf1d3a45e7ead86602f45e065cea3b63dc56269" gracePeriod=30 Feb 27 10:51:58 crc kubenswrapper[4728]: I0227 10:51:58.566163 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 10:51:58 crc kubenswrapper[4728]: I0227 10:51:58.566151 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerName="proxy-httpd" 
containerID="cri-o://b1e98f0a01f972e8e0a4e700d3c5a92aaee69d6d29c18e60e1f23b8b081e4173" gracePeriod=30 Feb 27 10:51:58 crc kubenswrapper[4728]: I0227 10:51:58.566181 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerName="sg-core" containerID="cri-o://94083ca2be748847e2f46f86ecabedd161cb7582bee47fabfeca03ac316c26ed" gracePeriod=30 Feb 27 10:51:58 crc kubenswrapper[4728]: I0227 10:51:58.566214 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerName="ceilometer-notification-agent" containerID="cri-o://b3e48fbd853810a5fa71c2ec9a6b3a770c83d310830c8678e070b6ff9a459c5b" gracePeriod=30 Feb 27 10:51:58 crc kubenswrapper[4728]: I0227 10:51:58.594157 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-zdmnv" podStartSLOduration=2.594134494 podStartE2EDuration="2.594134494s" podCreationTimestamp="2026-02-27 10:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:51:58.580144714 +0000 UTC m=+1538.542510840" watchObservedRunningTime="2026-02-27 10:51:58.594134494 +0000 UTC m=+1538.556500600" Feb 27 10:51:58 crc kubenswrapper[4728]: I0227 10:51:58.617783 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.528478332 podStartE2EDuration="6.617761805s" podCreationTimestamp="2026-02-27 10:51:52 +0000 UTC" firstStartedPulling="2026-02-27 10:51:53.427662271 +0000 UTC m=+1533.390028377" lastFinishedPulling="2026-02-27 10:51:57.516945744 +0000 UTC m=+1537.479311850" observedRunningTime="2026-02-27 10:51:58.615576725 +0000 UTC m=+1538.577942831" watchObservedRunningTime="2026-02-27 10:51:58.617761805 +0000 UTC m=+1538.580127911" Feb 27 10:51:59 
crc kubenswrapper[4728]: I0227 10:51:59.589039 4728 generic.go:334] "Generic (PLEG): container finished" podID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerID="b1e98f0a01f972e8e0a4e700d3c5a92aaee69d6d29c18e60e1f23b8b081e4173" exitCode=0 Feb 27 10:51:59 crc kubenswrapper[4728]: I0227 10:51:59.589073 4728 generic.go:334] "Generic (PLEG): container finished" podID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerID="94083ca2be748847e2f46f86ecabedd161cb7582bee47fabfeca03ac316c26ed" exitCode=2 Feb 27 10:51:59 crc kubenswrapper[4728]: I0227 10:51:59.589081 4728 generic.go:334] "Generic (PLEG): container finished" podID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerID="b3e48fbd853810a5fa71c2ec9a6b3a770c83d310830c8678e070b6ff9a459c5b" exitCode=0 Feb 27 10:51:59 crc kubenswrapper[4728]: I0227 10:51:59.589168 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7779b2a2-e696-4b07-9b34-9c43065ada96","Type":"ContainerDied","Data":"b1e98f0a01f972e8e0a4e700d3c5a92aaee69d6d29c18e60e1f23b8b081e4173"} Feb 27 10:51:59 crc kubenswrapper[4728]: I0227 10:51:59.589314 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7779b2a2-e696-4b07-9b34-9c43065ada96","Type":"ContainerDied","Data":"94083ca2be748847e2f46f86ecabedd161cb7582bee47fabfeca03ac316c26ed"} Feb 27 10:51:59 crc kubenswrapper[4728]: I0227 10:51:59.589345 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7779b2a2-e696-4b07-9b34-9c43065ada96","Type":"ContainerDied","Data":"b3e48fbd853810a5fa71c2ec9a6b3a770c83d310830c8678e070b6ff9a459c5b"} Feb 27 10:52:00 crc kubenswrapper[4728]: I0227 10:52:00.154804 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536492-xrgfd"] Feb 27 10:52:00 crc kubenswrapper[4728]: I0227 10:52:00.157464 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536492-xrgfd" Feb 27 10:52:00 crc kubenswrapper[4728]: I0227 10:52:00.170056 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:52:00 crc kubenswrapper[4728]: I0227 10:52:00.170161 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 10:52:00 crc kubenswrapper[4728]: I0227 10:52:00.172404 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:52:00 crc kubenswrapper[4728]: I0227 10:52:00.184195 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536492-xrgfd"] Feb 27 10:52:00 crc kubenswrapper[4728]: I0227 10:52:00.246130 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxth8\" (UniqueName: \"kubernetes.io/projected/f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787-kube-api-access-kxth8\") pod \"auto-csr-approver-29536492-xrgfd\" (UID: \"f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787\") " pod="openshift-infra/auto-csr-approver-29536492-xrgfd" Feb 27 10:52:00 crc kubenswrapper[4728]: I0227 10:52:00.348528 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxth8\" (UniqueName: \"kubernetes.io/projected/f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787-kube-api-access-kxth8\") pod \"auto-csr-approver-29536492-xrgfd\" (UID: \"f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787\") " pod="openshift-infra/auto-csr-approver-29536492-xrgfd" Feb 27 10:52:00 crc kubenswrapper[4728]: I0227 10:52:00.372131 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxth8\" (UniqueName: \"kubernetes.io/projected/f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787-kube-api-access-kxth8\") pod \"auto-csr-approver-29536492-xrgfd\" (UID: \"f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787\") " 
pod="openshift-infra/auto-csr-approver-29536492-xrgfd" Feb 27 10:52:00 crc kubenswrapper[4728]: I0227 10:52:00.456981 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h4gl8" podUID="efa9e238-79b0-4757-acab-53537b5ae93a" containerName="registry-server" probeResult="failure" output=< Feb 27 10:52:00 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 10:52:00 crc kubenswrapper[4728]: > Feb 27 10:52:00 crc kubenswrapper[4728]: I0227 10:52:00.497797 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536492-xrgfd" Feb 27 10:52:00 crc kubenswrapper[4728]: W0227 10:52:00.975117 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8c25d7b_2d32_4f79_a9f0_c8a53ad7c787.slice/crio-bf807ea8fcaac8d3ad046e506ae986d9312836b551f69506e8ebc6d32f4f2754 WatchSource:0}: Error finding container bf807ea8fcaac8d3ad046e506ae986d9312836b551f69506e8ebc6d32f4f2754: Status 404 returned error can't find the container with id bf807ea8fcaac8d3ad046e506ae986d9312836b551f69506e8ebc6d32f4f2754 Feb 27 10:52:00 crc kubenswrapper[4728]: I0227 10:52:00.982005 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536492-xrgfd"] Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.621418 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536492-xrgfd" event={"ID":"f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787","Type":"ContainerStarted","Data":"bf807ea8fcaac8d3ad046e506ae986d9312836b551f69506e8ebc6d32f4f2754"} Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.624607 4728 generic.go:334] "Generic (PLEG): container finished" podID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerID="39c3495ef2240bcb4ccd6ecfbbf1d3a45e7ead86602f45e065cea3b63dc56269" exitCode=0 Feb 27 10:52:01 crc 
kubenswrapper[4728]: I0227 10:52:01.624650 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7779b2a2-e696-4b07-9b34-9c43065ada96","Type":"ContainerDied","Data":"39c3495ef2240bcb4ccd6ecfbbf1d3a45e7ead86602f45e065cea3b63dc56269"} Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.624675 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7779b2a2-e696-4b07-9b34-9c43065ada96","Type":"ContainerDied","Data":"32167401d2ea6cdd70f83f4b38da417f08dd40b9bb67b4e542ba6ec9dc4ad69f"} Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.624689 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32167401d2ea6cdd70f83f4b38da417f08dd40b9bb67b4e542ba6ec9dc4ad69f" Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.669605 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.798444 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-sg-core-conf-yaml\") pod \"7779b2a2-e696-4b07-9b34-9c43065ada96\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.798543 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7779b2a2-e696-4b07-9b34-9c43065ada96-run-httpd\") pod \"7779b2a2-e696-4b07-9b34-9c43065ada96\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.798591 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-combined-ca-bundle\") pod \"7779b2a2-e696-4b07-9b34-9c43065ada96\" (UID: 
\"7779b2a2-e696-4b07-9b34-9c43065ada96\") " Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.798699 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-config-data\") pod \"7779b2a2-e696-4b07-9b34-9c43065ada96\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.798832 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-scripts\") pod \"7779b2a2-e696-4b07-9b34-9c43065ada96\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.798887 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg6hh\" (UniqueName: \"kubernetes.io/projected/7779b2a2-e696-4b07-9b34-9c43065ada96-kube-api-access-hg6hh\") pod \"7779b2a2-e696-4b07-9b34-9c43065ada96\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.798923 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7779b2a2-e696-4b07-9b34-9c43065ada96-log-httpd\") pod \"7779b2a2-e696-4b07-9b34-9c43065ada96\" (UID: \"7779b2a2-e696-4b07-9b34-9c43065ada96\") " Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.801619 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7779b2a2-e696-4b07-9b34-9c43065ada96-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7779b2a2-e696-4b07-9b34-9c43065ada96" (UID: "7779b2a2-e696-4b07-9b34-9c43065ada96"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.802311 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7779b2a2-e696-4b07-9b34-9c43065ada96-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7779b2a2-e696-4b07-9b34-9c43065ada96" (UID: "7779b2a2-e696-4b07-9b34-9c43065ada96"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.808924 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-scripts" (OuterVolumeSpecName: "scripts") pod "7779b2a2-e696-4b07-9b34-9c43065ada96" (UID: "7779b2a2-e696-4b07-9b34-9c43065ada96"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.823191 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7779b2a2-e696-4b07-9b34-9c43065ada96-kube-api-access-hg6hh" (OuterVolumeSpecName: "kube-api-access-hg6hh") pod "7779b2a2-e696-4b07-9b34-9c43065ada96" (UID: "7779b2a2-e696-4b07-9b34-9c43065ada96"). InnerVolumeSpecName "kube-api-access-hg6hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.835325 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7779b2a2-e696-4b07-9b34-9c43065ada96" (UID: "7779b2a2-e696-4b07-9b34-9c43065ada96"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.901911 4728 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.901949 4728 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7779b2a2-e696-4b07-9b34-9c43065ada96-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.901962 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.901975 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg6hh\" (UniqueName: \"kubernetes.io/projected/7779b2a2-e696-4b07-9b34-9c43065ada96-kube-api-access-hg6hh\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.901991 4728 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7779b2a2-e696-4b07-9b34-9c43065ada96-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.903098 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7779b2a2-e696-4b07-9b34-9c43065ada96" (UID: "7779b2a2-e696-4b07-9b34-9c43065ada96"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:01 crc kubenswrapper[4728]: I0227 10:52:01.919312 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-config-data" (OuterVolumeSpecName: "config-data") pod "7779b2a2-e696-4b07-9b34-9c43065ada96" (UID: "7779b2a2-e696-4b07-9b34-9c43065ada96"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.005056 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.005129 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7779b2a2-e696-4b07-9b34-9c43065ada96-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.644214 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536492-xrgfd" event={"ID":"f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787","Type":"ContainerStarted","Data":"0fc5bba26e1853d9f84d1ce5af9e45dc24081e9904ceae5bd9266248fe545f9d"} Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.644277 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.658560 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536492-xrgfd" podStartSLOduration=1.620541357 podStartE2EDuration="2.658493961s" podCreationTimestamp="2026-02-27 10:52:00 +0000 UTC" firstStartedPulling="2026-02-27 10:52:00.979588154 +0000 UTC m=+1540.941954260" lastFinishedPulling="2026-02-27 10:52:02.017540758 +0000 UTC m=+1541.979906864" observedRunningTime="2026-02-27 10:52:02.6558733 +0000 UTC m=+1542.618239406" watchObservedRunningTime="2026-02-27 10:52:02.658493961 +0000 UTC m=+1542.620860077" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.707268 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.722856 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.744287 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7779b2a2-e696-4b07-9b34-9c43065ada96" path="/var/lib/kubelet/pods/7779b2a2-e696-4b07-9b34-9c43065ada96/volumes" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.745398 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:52:02 crc kubenswrapper[4728]: E0227 10:52:02.745847 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerName="proxy-httpd" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.745862 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerName="proxy-httpd" Feb 27 10:52:02 crc kubenswrapper[4728]: E0227 10:52:02.745894 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerName="sg-core" Feb 27 10:52:02 crc 
kubenswrapper[4728]: I0227 10:52:02.745901 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerName="sg-core" Feb 27 10:52:02 crc kubenswrapper[4728]: E0227 10:52:02.745923 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerName="ceilometer-central-agent" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.745930 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerName="ceilometer-central-agent" Feb 27 10:52:02 crc kubenswrapper[4728]: E0227 10:52:02.745945 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerName="ceilometer-notification-agent" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.745951 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerName="ceilometer-notification-agent" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.746192 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerName="ceilometer-central-agent" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.746212 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerName="sg-core" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.746251 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerName="ceilometer-notification-agent" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.746261 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7779b2a2-e696-4b07-9b34-9c43065ada96" containerName="proxy-httpd" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.749235 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.749755 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.768567 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.768870 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.826166 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.826325 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-config-data\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.826546 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0b373eb-8903-41d1-b698-5e2a0a87aae7-run-httpd\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.826636 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " 
pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.826915 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0b373eb-8903-41d1-b698-5e2a0a87aae7-log-httpd\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.826965 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwc9g\" (UniqueName: \"kubernetes.io/projected/c0b373eb-8903-41d1-b698-5e2a0a87aae7-kube-api-access-nwc9g\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.827142 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-scripts\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.930430 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-config-data\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.930486 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0b373eb-8903-41d1-b698-5e2a0a87aae7-run-httpd\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.930535 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.930670 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0b373eb-8903-41d1-b698-5e2a0a87aae7-log-httpd\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.930703 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwc9g\" (UniqueName: \"kubernetes.io/projected/c0b373eb-8903-41d1-b698-5e2a0a87aae7-kube-api-access-nwc9g\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.930772 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-scripts\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.930867 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.931312 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0b373eb-8903-41d1-b698-5e2a0a87aae7-run-httpd\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc 
kubenswrapper[4728]: I0227 10:52:02.931356 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0b373eb-8903-41d1-b698-5e2a0a87aae7-log-httpd\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.935578 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.936547 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-scripts\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.936852 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.947735 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwc9g\" (UniqueName: \"kubernetes.io/projected/c0b373eb-8903-41d1-b698-5e2a0a87aae7-kube-api-access-nwc9g\") pod \"ceilometer-0\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:02 crc kubenswrapper[4728]: I0227 10:52:02.955624 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-config-data\") pod \"ceilometer-0\" (UID: 
\"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " pod="openstack/ceilometer-0" Feb 27 10:52:03 crc kubenswrapper[4728]: I0227 10:52:03.161928 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:52:03 crc kubenswrapper[4728]: I0227 10:52:03.662237 4728 generic.go:334] "Generic (PLEG): container finished" podID="08778ec2-d0d0-42a2-8497-290bfe1b10c1" containerID="7daba541974762bd1dc66fca9853b1be4a8ce00014832111e32d50a498e242d1" exitCode=0 Feb 27 10:52:03 crc kubenswrapper[4728]: I0227 10:52:03.662440 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zdmnv" event={"ID":"08778ec2-d0d0-42a2-8497-290bfe1b10c1","Type":"ContainerDied","Data":"7daba541974762bd1dc66fca9853b1be4a8ce00014832111e32d50a498e242d1"} Feb 27 10:52:03 crc kubenswrapper[4728]: I0227 10:52:03.665742 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:52:03 crc kubenswrapper[4728]: I0227 10:52:03.665928 4728 generic.go:334] "Generic (PLEG): container finished" podID="f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787" containerID="0fc5bba26e1853d9f84d1ce5af9e45dc24081e9904ceae5bd9266248fe545f9d" exitCode=0 Feb 27 10:52:03 crc kubenswrapper[4728]: I0227 10:52:03.666058 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536492-xrgfd" event={"ID":"f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787","Type":"ContainerDied","Data":"0fc5bba26e1853d9f84d1ce5af9e45dc24081e9904ceae5bd9266248fe545f9d"} Feb 27 10:52:03 crc kubenswrapper[4728]: I0227 10:52:03.951403 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 10:52:03 crc kubenswrapper[4728]: I0227 10:52:03.951467 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 10:52:04 crc kubenswrapper[4728]: I0227 10:52:04.685223 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"c0b373eb-8903-41d1-b698-5e2a0a87aae7","Type":"ContainerStarted","Data":"d34e6f3f51c21e903d239d9ff8d2e3625e8ad1365225d3419b8e3f0ff4c71928"} Feb 27 10:52:04 crc kubenswrapper[4728]: I0227 10:52:04.685781 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0b373eb-8903-41d1-b698-5e2a0a87aae7","Type":"ContainerStarted","Data":"dfbcb593b17c42c849d4bc93bb5c402a3d5faa2bf73fc99ed6b8d3f9d7c799d0"} Feb 27 10:52:04 crc kubenswrapper[4728]: I0227 10:52:04.962963 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="195e680c-d89c-4b39-ae68-145934b70fa2" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.13:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 10:52:04 crc kubenswrapper[4728]: I0227 10:52:04.962988 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="195e680c-d89c-4b39-ae68-145934b70fa2" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.13:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.442451 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536492-xrgfd" Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.453721 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zdmnv" Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.499992 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08778ec2-d0d0-42a2-8497-290bfe1b10c1-config-data\") pod \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\" (UID: \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\") " Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.500055 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxth8\" (UniqueName: \"kubernetes.io/projected/f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787-kube-api-access-kxth8\") pod \"f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787\" (UID: \"f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787\") " Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.500135 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08778ec2-d0d0-42a2-8497-290bfe1b10c1-combined-ca-bundle\") pod \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\" (UID: \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\") " Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.500195 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08778ec2-d0d0-42a2-8497-290bfe1b10c1-scripts\") pod \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\" (UID: \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\") " Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.500326 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxf2t\" (UniqueName: \"kubernetes.io/projected/08778ec2-d0d0-42a2-8497-290bfe1b10c1-kube-api-access-lxf2t\") pod \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\" (UID: \"08778ec2-d0d0-42a2-8497-290bfe1b10c1\") " Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.514496 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787-kube-api-access-kxth8" (OuterVolumeSpecName: "kube-api-access-kxth8") pod "f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787" (UID: "f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787"). InnerVolumeSpecName "kube-api-access-kxth8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.515036 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08778ec2-d0d0-42a2-8497-290bfe1b10c1-kube-api-access-lxf2t" (OuterVolumeSpecName: "kube-api-access-lxf2t") pod "08778ec2-d0d0-42a2-8497-290bfe1b10c1" (UID: "08778ec2-d0d0-42a2-8497-290bfe1b10c1"). InnerVolumeSpecName "kube-api-access-lxf2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.547593 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08778ec2-d0d0-42a2-8497-290bfe1b10c1-scripts" (OuterVolumeSpecName: "scripts") pod "08778ec2-d0d0-42a2-8497-290bfe1b10c1" (UID: "08778ec2-d0d0-42a2-8497-290bfe1b10c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.592706 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08778ec2-d0d0-42a2-8497-290bfe1b10c1-config-data" (OuterVolumeSpecName: "config-data") pod "08778ec2-d0d0-42a2-8497-290bfe1b10c1" (UID: "08778ec2-d0d0-42a2-8497-290bfe1b10c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.605371 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08778ec2-d0d0-42a2-8497-290bfe1b10c1-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.605406 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxf2t\" (UniqueName: \"kubernetes.io/projected/08778ec2-d0d0-42a2-8497-290bfe1b10c1-kube-api-access-lxf2t\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.605421 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08778ec2-d0d0-42a2-8497-290bfe1b10c1-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.605434 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxth8\" (UniqueName: \"kubernetes.io/projected/f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787-kube-api-access-kxth8\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.625959 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08778ec2-d0d0-42a2-8497-290bfe1b10c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08778ec2-d0d0-42a2-8497-290bfe1b10c1" (UID: "08778ec2-d0d0-42a2-8497-290bfe1b10c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.697472 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0b373eb-8903-41d1-b698-5e2a0a87aae7","Type":"ContainerStarted","Data":"ff5386bc9ce4ec7bc6ad56a5dc3d71b3f25aefa437b4d819fb231decd7b4e545"} Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.703377 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zdmnv" Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.703563 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zdmnv" event={"ID":"08778ec2-d0d0-42a2-8497-290bfe1b10c1","Type":"ContainerDied","Data":"353894cda2446bd1ec1a5cd4de68d341cf7858138a53e7a92e59e041b4facb37"} Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.703605 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="353894cda2446bd1ec1a5cd4de68d341cf7858138a53e7a92e59e041b4facb37" Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.707047 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536492-xrgfd" event={"ID":"f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787","Type":"ContainerDied","Data":"bf807ea8fcaac8d3ad046e506ae986d9312836b551f69506e8ebc6d32f4f2754"} Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.707123 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf807ea8fcaac8d3ad046e506ae986d9312836b551f69506e8ebc6d32f4f2754" Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.707128 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08778ec2-d0d0-42a2-8497-290bfe1b10c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.707194 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536492-xrgfd" Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.753557 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536486-wmfdc"] Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.769165 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536486-wmfdc"] Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.888257 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.888464 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a22e62c7-44fe-4603-af1a-95ca06a943c4" containerName="nova-scheduler-scheduler" containerID="cri-o://b23a3afe2d0e169412dc8126968556686aa43e8a84aac3f3690f7737003595cd" gracePeriod=30 Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.917002 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.917276 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="195e680c-d89c-4b39-ae68-145934b70fa2" containerName="nova-api-log" containerID="cri-o://8e6000b65abfaea8381dd3976c155f1a601e5c00805ad4d2bd622bfacd6d0749" gracePeriod=30 Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.917368 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="195e680c-d89c-4b39-ae68-145934b70fa2" containerName="nova-api-api" containerID="cri-o://1166b274401bd1f7b903a229bfec9c38ef0399907dc26a92c7654a11f4cb85ad" gracePeriod=30 Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.922442 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:52:05 crc kubenswrapper[4728]: I0227 10:52:05.922533 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:52:06 crc kubenswrapper[4728]: I0227 10:52:06.009800 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:52:06 crc kubenswrapper[4728]: I0227 10:52:06.011142 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ec89ac4c-d100-4004-bb62-0f5e6a344efd" containerName="nova-metadata-metadata" containerID="cri-o://31ae060db3042c2ee9fd2e07a59e8fbebf9c2620bdc53d1a88fce283fc5f05a8" gracePeriod=30 Feb 27 10:52:06 crc kubenswrapper[4728]: I0227 10:52:06.011650 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ec89ac4c-d100-4004-bb62-0f5e6a344efd" containerName="nova-metadata-log" containerID="cri-o://bc84e6508a54386778ee5d8c8866649291f99f6ccf8696cbb092027912f04156" gracePeriod=30 Feb 27 10:52:06 crc kubenswrapper[4728]: I0227 10:52:06.720121 4728 generic.go:334] "Generic (PLEG): container finished" podID="195e680c-d89c-4b39-ae68-145934b70fa2" containerID="8e6000b65abfaea8381dd3976c155f1a601e5c00805ad4d2bd622bfacd6d0749" exitCode=143 Feb 27 10:52:06 crc kubenswrapper[4728]: I0227 10:52:06.720460 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"195e680c-d89c-4b39-ae68-145934b70fa2","Type":"ContainerDied","Data":"8e6000b65abfaea8381dd3976c155f1a601e5c00805ad4d2bd622bfacd6d0749"} Feb 27 10:52:06 crc kubenswrapper[4728]: I0227 10:52:06.726103 
4728 generic.go:334] "Generic (PLEG): container finished" podID="ec89ac4c-d100-4004-bb62-0f5e6a344efd" containerID="bc84e6508a54386778ee5d8c8866649291f99f6ccf8696cbb092027912f04156" exitCode=143 Feb 27 10:52:06 crc kubenswrapper[4728]: I0227 10:52:06.738913 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="073d11d2-49f2-497c-a676-5aa3dcb13859" path="/var/lib/kubelet/pods/073d11d2-49f2-497c-a676-5aa3dcb13859/volumes" Feb 27 10:52:06 crc kubenswrapper[4728]: I0227 10:52:06.739730 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0b373eb-8903-41d1-b698-5e2a0a87aae7","Type":"ContainerStarted","Data":"4872384e50c57811fe680473d1af185d03e153375b72a73f74b00dcf699ea81a"} Feb 27 10:52:06 crc kubenswrapper[4728]: I0227 10:52:06.739760 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec89ac4c-d100-4004-bb62-0f5e6a344efd","Type":"ContainerDied","Data":"bc84e6508a54386778ee5d8c8866649291f99f6ccf8696cbb092027912f04156"} Feb 27 10:52:07 crc kubenswrapper[4728]: E0227 10:52:07.015813 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b23a3afe2d0e169412dc8126968556686aa43e8a84aac3f3690f7737003595cd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 10:52:07 crc kubenswrapper[4728]: E0227 10:52:07.021593 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b23a3afe2d0e169412dc8126968556686aa43e8a84aac3f3690f7737003595cd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 10:52:07 crc kubenswrapper[4728]: E0227 10:52:07.024880 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command 
error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b23a3afe2d0e169412dc8126968556686aa43e8a84aac3f3690f7737003595cd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 10:52:07 crc kubenswrapper[4728]: E0227 10:52:07.024958 4728 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a22e62c7-44fe-4603-af1a-95ca06a943c4" containerName="nova-scheduler-scheduler" Feb 27 10:52:07 crc kubenswrapper[4728]: I0227 10:52:07.720188 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 10:52:07 crc kubenswrapper[4728]: I0227 10:52:07.741468 4728 generic.go:334] "Generic (PLEG): container finished" podID="a22e62c7-44fe-4603-af1a-95ca06a943c4" containerID="b23a3afe2d0e169412dc8126968556686aa43e8a84aac3f3690f7737003595cd" exitCode=0 Feb 27 10:52:07 crc kubenswrapper[4728]: I0227 10:52:07.741529 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a22e62c7-44fe-4603-af1a-95ca06a943c4","Type":"ContainerDied","Data":"b23a3afe2d0e169412dc8126968556686aa43e8a84aac3f3690f7737003595cd"} Feb 27 10:52:07 crc kubenswrapper[4728]: I0227 10:52:07.741562 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a22e62c7-44fe-4603-af1a-95ca06a943c4","Type":"ContainerDied","Data":"f5dfca0f6317791058a4e104dad9b2fd8bad0c90fa8033ef4de9d52a745defeb"} Feb 27 10:52:07 crc kubenswrapper[4728]: I0227 10:52:07.741581 4728 scope.go:117] "RemoveContainer" containerID="b23a3afe2d0e169412dc8126968556686aa43e8a84aac3f3690f7737003595cd" Feb 27 10:52:07 crc kubenswrapper[4728]: I0227 10:52:07.741695 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 10:52:07 crc kubenswrapper[4728]: I0227 10:52:07.766813 4728 scope.go:117] "RemoveContainer" containerID="b23a3afe2d0e169412dc8126968556686aa43e8a84aac3f3690f7737003595cd" Feb 27 10:52:07 crc kubenswrapper[4728]: E0227 10:52:07.768879 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b23a3afe2d0e169412dc8126968556686aa43e8a84aac3f3690f7737003595cd\": container with ID starting with b23a3afe2d0e169412dc8126968556686aa43e8a84aac3f3690f7737003595cd not found: ID does not exist" containerID="b23a3afe2d0e169412dc8126968556686aa43e8a84aac3f3690f7737003595cd" Feb 27 10:52:07 crc kubenswrapper[4728]: I0227 10:52:07.768936 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b23a3afe2d0e169412dc8126968556686aa43e8a84aac3f3690f7737003595cd"} err="failed to get container status \"b23a3afe2d0e169412dc8126968556686aa43e8a84aac3f3690f7737003595cd\": rpc error: code = NotFound desc = could not find container \"b23a3afe2d0e169412dc8126968556686aa43e8a84aac3f3690f7737003595cd\": container with ID starting with b23a3afe2d0e169412dc8126968556686aa43e8a84aac3f3690f7737003595cd not found: ID does not exist" Feb 27 10:52:07 crc kubenswrapper[4728]: I0227 10:52:07.857186 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-587r2\" (UniqueName: \"kubernetes.io/projected/a22e62c7-44fe-4603-af1a-95ca06a943c4-kube-api-access-587r2\") pod \"a22e62c7-44fe-4603-af1a-95ca06a943c4\" (UID: \"a22e62c7-44fe-4603-af1a-95ca06a943c4\") " Feb 27 10:52:07 crc kubenswrapper[4728]: I0227 10:52:07.857390 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a22e62c7-44fe-4603-af1a-95ca06a943c4-combined-ca-bundle\") pod \"a22e62c7-44fe-4603-af1a-95ca06a943c4\" (UID: 
\"a22e62c7-44fe-4603-af1a-95ca06a943c4\") " Feb 27 10:52:07 crc kubenswrapper[4728]: I0227 10:52:07.857426 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a22e62c7-44fe-4603-af1a-95ca06a943c4-config-data\") pod \"a22e62c7-44fe-4603-af1a-95ca06a943c4\" (UID: \"a22e62c7-44fe-4603-af1a-95ca06a943c4\") " Feb 27 10:52:07 crc kubenswrapper[4728]: I0227 10:52:07.874785 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a22e62c7-44fe-4603-af1a-95ca06a943c4-kube-api-access-587r2" (OuterVolumeSpecName: "kube-api-access-587r2") pod "a22e62c7-44fe-4603-af1a-95ca06a943c4" (UID: "a22e62c7-44fe-4603-af1a-95ca06a943c4"). InnerVolumeSpecName "kube-api-access-587r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:07 crc kubenswrapper[4728]: I0227 10:52:07.904605 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a22e62c7-44fe-4603-af1a-95ca06a943c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a22e62c7-44fe-4603-af1a-95ca06a943c4" (UID: "a22e62c7-44fe-4603-af1a-95ca06a943c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:07 crc kubenswrapper[4728]: I0227 10:52:07.909364 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a22e62c7-44fe-4603-af1a-95ca06a943c4-config-data" (OuterVolumeSpecName: "config-data") pod "a22e62c7-44fe-4603-af1a-95ca06a943c4" (UID: "a22e62c7-44fe-4603-af1a-95ca06a943c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:07 crc kubenswrapper[4728]: I0227 10:52:07.961075 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-587r2\" (UniqueName: \"kubernetes.io/projected/a22e62c7-44fe-4603-af1a-95ca06a943c4-kube-api-access-587r2\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:07 crc kubenswrapper[4728]: I0227 10:52:07.961120 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a22e62c7-44fe-4603-af1a-95ca06a943c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:07 crc kubenswrapper[4728]: I0227 10:52:07.961133 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a22e62c7-44fe-4603-af1a-95ca06a943c4-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.080925 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.122320 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.153165 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:52:08 crc kubenswrapper[4728]: E0227 10:52:08.155130 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787" containerName="oc" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.155170 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787" containerName="oc" Feb 27 10:52:08 crc kubenswrapper[4728]: E0227 10:52:08.155202 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08778ec2-d0d0-42a2-8497-290bfe1b10c1" containerName="nova-manage" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.155210 4728 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="08778ec2-d0d0-42a2-8497-290bfe1b10c1" containerName="nova-manage" Feb 27 10:52:08 crc kubenswrapper[4728]: E0227 10:52:08.160424 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a22e62c7-44fe-4603-af1a-95ca06a943c4" containerName="nova-scheduler-scheduler" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.160475 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a22e62c7-44fe-4603-af1a-95ca06a943c4" containerName="nova-scheduler-scheduler" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.161564 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787" containerName="oc" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.161589 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="08778ec2-d0d0-42a2-8497-290bfe1b10c1" containerName="nova-manage" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.161757 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a22e62c7-44fe-4603-af1a-95ca06a943c4" containerName="nova-scheduler-scheduler" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.162804 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.166856 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.200497 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.282509 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96210d4e-7270-4181-b154-388611ae10fc-config-data\") pod \"nova-scheduler-0\" (UID: \"96210d4e-7270-4181-b154-388611ae10fc\") " pod="openstack/nova-scheduler-0" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.282571 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpcns\" (UniqueName: \"kubernetes.io/projected/96210d4e-7270-4181-b154-388611ae10fc-kube-api-access-cpcns\") pod \"nova-scheduler-0\" (UID: \"96210d4e-7270-4181-b154-388611ae10fc\") " pod="openstack/nova-scheduler-0" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.282772 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96210d4e-7270-4181-b154-388611ae10fc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96210d4e-7270-4181-b154-388611ae10fc\") " pod="openstack/nova-scheduler-0" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.385851 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96210d4e-7270-4181-b154-388611ae10fc-config-data\") pod \"nova-scheduler-0\" (UID: \"96210d4e-7270-4181-b154-388611ae10fc\") " pod="openstack/nova-scheduler-0" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.385951 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cpcns\" (UniqueName: \"kubernetes.io/projected/96210d4e-7270-4181-b154-388611ae10fc-kube-api-access-cpcns\") pod \"nova-scheduler-0\" (UID: \"96210d4e-7270-4181-b154-388611ae10fc\") " pod="openstack/nova-scheduler-0" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.386009 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96210d4e-7270-4181-b154-388611ae10fc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96210d4e-7270-4181-b154-388611ae10fc\") " pod="openstack/nova-scheduler-0" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.391094 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96210d4e-7270-4181-b154-388611ae10fc-config-data\") pod \"nova-scheduler-0\" (UID: \"96210d4e-7270-4181-b154-388611ae10fc\") " pod="openstack/nova-scheduler-0" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.391158 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96210d4e-7270-4181-b154-388611ae10fc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96210d4e-7270-4181-b154-388611ae10fc\") " pod="openstack/nova-scheduler-0" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.412463 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpcns\" (UniqueName: \"kubernetes.io/projected/96210d4e-7270-4181-b154-388611ae10fc-kube-api-access-cpcns\") pod \"nova-scheduler-0\" (UID: \"96210d4e-7270-4181-b154-388611ae10fc\") " pod="openstack/nova-scheduler-0" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.575489 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.747371 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a22e62c7-44fe-4603-af1a-95ca06a943c4" path="/var/lib/kubelet/pods/a22e62c7-44fe-4603-af1a-95ca06a943c4/volumes" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.773916 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0b373eb-8903-41d1-b698-5e2a0a87aae7","Type":"ContainerStarted","Data":"2af07107f345b07c0265df730d5c3a3cc12c941598a5cf2874e2da81d16d043b"} Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.774158 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 10:52:08 crc kubenswrapper[4728]: I0227 10:52:08.804136 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.422766015 podStartE2EDuration="6.804119823s" podCreationTimestamp="2026-02-27 10:52:02 +0000 UTC" firstStartedPulling="2026-02-27 10:52:03.652066992 +0000 UTC m=+1543.614433108" lastFinishedPulling="2026-02-27 10:52:08.03342081 +0000 UTC m=+1547.995786916" observedRunningTime="2026-02-27 10:52:08.803897027 +0000 UTC m=+1548.766263133" watchObservedRunningTime="2026-02-27 10:52:08.804119823 +0000 UTC m=+1548.766485929" Feb 27 10:52:09 crc kubenswrapper[4728]: I0227 10:52:09.145358 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 10:52:09 crc kubenswrapper[4728]: I0227 10:52:09.442654 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ec89ac4c-d100-4004-bb62-0f5e6a344efd" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.4:8775/\": read tcp 10.217.0.2:41896->10.217.1.4:8775: read: connection reset by peer" Feb 27 10:52:09 crc kubenswrapper[4728]: I0227 10:52:09.442676 4728 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ec89ac4c-d100-4004-bb62-0f5e6a344efd" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.4:8775/\": read tcp 10.217.0.2:41894->10.217.1.4:8775: read: connection reset by peer" Feb 27 10:52:09 crc kubenswrapper[4728]: I0227 10:52:09.791826 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96210d4e-7270-4181-b154-388611ae10fc","Type":"ContainerStarted","Data":"48bf4eb151ced2c6d56050cd109fa8b1ab788861a9b980f91843b3a04c3dddf4"} Feb 27 10:52:09 crc kubenswrapper[4728]: I0227 10:52:09.791881 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96210d4e-7270-4181-b154-388611ae10fc","Type":"ContainerStarted","Data":"238026df171d8f2cc40741d682a761eaabd1f9ee8f1b844a7563dbb4e5d9d270"} Feb 27 10:52:09 crc kubenswrapper[4728]: I0227 10:52:09.794890 4728 generic.go:334] "Generic (PLEG): container finished" podID="ec89ac4c-d100-4004-bb62-0f5e6a344efd" containerID="31ae060db3042c2ee9fd2e07a59e8fbebf9c2620bdc53d1a88fce283fc5f05a8" exitCode=0 Feb 27 10:52:09 crc kubenswrapper[4728]: I0227 10:52:09.794977 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec89ac4c-d100-4004-bb62-0f5e6a344efd","Type":"ContainerDied","Data":"31ae060db3042c2ee9fd2e07a59e8fbebf9c2620bdc53d1a88fce283fc5f05a8"} Feb 27 10:52:09 crc kubenswrapper[4728]: I0227 10:52:09.821195 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.821176221 podStartE2EDuration="1.821176221s" podCreationTimestamp="2026-02-27 10:52:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:52:09.812236608 +0000 UTC m=+1549.774602734" watchObservedRunningTime="2026-02-27 10:52:09.821176221 +0000 UTC 
m=+1549.783542327" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.043497 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.135619 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec89ac4c-d100-4004-bb62-0f5e6a344efd-nova-metadata-tls-certs\") pod \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.135729 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qfb4\" (UniqueName: \"kubernetes.io/projected/ec89ac4c-d100-4004-bb62-0f5e6a344efd-kube-api-access-5qfb4\") pod \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.135838 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec89ac4c-d100-4004-bb62-0f5e6a344efd-combined-ca-bundle\") pod \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.136040 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec89ac4c-d100-4004-bb62-0f5e6a344efd-logs\") pod \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.136077 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec89ac4c-d100-4004-bb62-0f5e6a344efd-config-data\") pod \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\" (UID: \"ec89ac4c-d100-4004-bb62-0f5e6a344efd\") " Feb 27 10:52:10 crc 
kubenswrapper[4728]: I0227 10:52:10.138187 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec89ac4c-d100-4004-bb62-0f5e6a344efd-logs" (OuterVolumeSpecName: "logs") pod "ec89ac4c-d100-4004-bb62-0f5e6a344efd" (UID: "ec89ac4c-d100-4004-bb62-0f5e6a344efd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.143343 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec89ac4c-d100-4004-bb62-0f5e6a344efd-kube-api-access-5qfb4" (OuterVolumeSpecName: "kube-api-access-5qfb4") pod "ec89ac4c-d100-4004-bb62-0f5e6a344efd" (UID: "ec89ac4c-d100-4004-bb62-0f5e6a344efd"). InnerVolumeSpecName "kube-api-access-5qfb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.176314 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec89ac4c-d100-4004-bb62-0f5e6a344efd-config-data" (OuterVolumeSpecName: "config-data") pod "ec89ac4c-d100-4004-bb62-0f5e6a344efd" (UID: "ec89ac4c-d100-4004-bb62-0f5e6a344efd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.180655 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec89ac4c-d100-4004-bb62-0f5e6a344efd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec89ac4c-d100-4004-bb62-0f5e6a344efd" (UID: "ec89ac4c-d100-4004-bb62-0f5e6a344efd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.241954 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qfb4\" (UniqueName: \"kubernetes.io/projected/ec89ac4c-d100-4004-bb62-0f5e6a344efd-kube-api-access-5qfb4\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.242004 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec89ac4c-d100-4004-bb62-0f5e6a344efd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.242022 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec89ac4c-d100-4004-bb62-0f5e6a344efd-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.242039 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec89ac4c-d100-4004-bb62-0f5e6a344efd-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.253778 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec89ac4c-d100-4004-bb62-0f5e6a344efd-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ec89ac4c-d100-4004-bb62-0f5e6a344efd" (UID: "ec89ac4c-d100-4004-bb62-0f5e6a344efd"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.344600 4728 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec89ac4c-d100-4004-bb62-0f5e6a344efd-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.449791 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h4gl8" podUID="efa9e238-79b0-4757-acab-53537b5ae93a" containerName="registry-server" probeResult="failure" output=< Feb 27 10:52:10 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 10:52:10 crc kubenswrapper[4728]: > Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.810561 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.810573 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec89ac4c-d100-4004-bb62-0f5e6a344efd","Type":"ContainerDied","Data":"79d25bc6f61a2112686c3cbcfea377e07c653a59c490b039537dcf2c2ad416b8"} Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.810741 4728 scope.go:117] "RemoveContainer" containerID="31ae060db3042c2ee9fd2e07a59e8fbebf9c2620bdc53d1a88fce283fc5f05a8" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.851419 4728 scope.go:117] "RemoveContainer" containerID="bc84e6508a54386778ee5d8c8866649291f99f6ccf8696cbb092027912f04156" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.868323 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.885495 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.893012 4728 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:52:10 crc kubenswrapper[4728]: E0227 10:52:10.893466 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec89ac4c-d100-4004-bb62-0f5e6a344efd" containerName="nova-metadata-log" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.893483 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec89ac4c-d100-4004-bb62-0f5e6a344efd" containerName="nova-metadata-log" Feb 27 10:52:10 crc kubenswrapper[4728]: E0227 10:52:10.893531 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec89ac4c-d100-4004-bb62-0f5e6a344efd" containerName="nova-metadata-metadata" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.893538 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec89ac4c-d100-4004-bb62-0f5e6a344efd" containerName="nova-metadata-metadata" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.893726 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec89ac4c-d100-4004-bb62-0f5e6a344efd" containerName="nova-metadata-metadata" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.893757 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec89ac4c-d100-4004-bb62-0f5e6a344efd" containerName="nova-metadata-log" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.894883 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.903885 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.924716 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.924897 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.966113 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e3743e-125d-4cf1-b8bb-85cd0197b833-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"65e3743e-125d-4cf1-b8bb-85cd0197b833\") " pod="openstack/nova-metadata-0" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.967079 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gn5c\" (UniqueName: \"kubernetes.io/projected/65e3743e-125d-4cf1-b8bb-85cd0197b833-kube-api-access-4gn5c\") pod \"nova-metadata-0\" (UID: \"65e3743e-125d-4cf1-b8bb-85cd0197b833\") " pod="openstack/nova-metadata-0" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.967120 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65e3743e-125d-4cf1-b8bb-85cd0197b833-config-data\") pod \"nova-metadata-0\" (UID: \"65e3743e-125d-4cf1-b8bb-85cd0197b833\") " pod="openstack/nova-metadata-0" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.967297 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/65e3743e-125d-4cf1-b8bb-85cd0197b833-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"65e3743e-125d-4cf1-b8bb-85cd0197b833\") " pod="openstack/nova-metadata-0" Feb 27 10:52:10 crc kubenswrapper[4728]: I0227 10:52:10.967362 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65e3743e-125d-4cf1-b8bb-85cd0197b833-logs\") pod \"nova-metadata-0\" (UID: \"65e3743e-125d-4cf1-b8bb-85cd0197b833\") " pod="openstack/nova-metadata-0" Feb 27 10:52:11 crc kubenswrapper[4728]: I0227 10:52:11.069328 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gn5c\" (UniqueName: \"kubernetes.io/projected/65e3743e-125d-4cf1-b8bb-85cd0197b833-kube-api-access-4gn5c\") pod \"nova-metadata-0\" (UID: \"65e3743e-125d-4cf1-b8bb-85cd0197b833\") " pod="openstack/nova-metadata-0" Feb 27 10:52:11 crc kubenswrapper[4728]: I0227 10:52:11.069397 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65e3743e-125d-4cf1-b8bb-85cd0197b833-config-data\") pod \"nova-metadata-0\" (UID: \"65e3743e-125d-4cf1-b8bb-85cd0197b833\") " pod="openstack/nova-metadata-0" Feb 27 10:52:11 crc kubenswrapper[4728]: I0227 10:52:11.069560 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e3743e-125d-4cf1-b8bb-85cd0197b833-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"65e3743e-125d-4cf1-b8bb-85cd0197b833\") " pod="openstack/nova-metadata-0" Feb 27 10:52:11 crc kubenswrapper[4728]: I0227 10:52:11.069628 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65e3743e-125d-4cf1-b8bb-85cd0197b833-logs\") pod \"nova-metadata-0\" (UID: \"65e3743e-125d-4cf1-b8bb-85cd0197b833\") " 
pod="openstack/nova-metadata-0" Feb 27 10:52:11 crc kubenswrapper[4728]: I0227 10:52:11.069803 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e3743e-125d-4cf1-b8bb-85cd0197b833-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"65e3743e-125d-4cf1-b8bb-85cd0197b833\") " pod="openstack/nova-metadata-0" Feb 27 10:52:11 crc kubenswrapper[4728]: I0227 10:52:11.071422 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65e3743e-125d-4cf1-b8bb-85cd0197b833-logs\") pod \"nova-metadata-0\" (UID: \"65e3743e-125d-4cf1-b8bb-85cd0197b833\") " pod="openstack/nova-metadata-0" Feb 27 10:52:11 crc kubenswrapper[4728]: I0227 10:52:11.076474 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e3743e-125d-4cf1-b8bb-85cd0197b833-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"65e3743e-125d-4cf1-b8bb-85cd0197b833\") " pod="openstack/nova-metadata-0" Feb 27 10:52:11 crc kubenswrapper[4728]: I0227 10:52:11.077639 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e3743e-125d-4cf1-b8bb-85cd0197b833-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"65e3743e-125d-4cf1-b8bb-85cd0197b833\") " pod="openstack/nova-metadata-0" Feb 27 10:52:11 crc kubenswrapper[4728]: I0227 10:52:11.078852 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65e3743e-125d-4cf1-b8bb-85cd0197b833-config-data\") pod \"nova-metadata-0\" (UID: \"65e3743e-125d-4cf1-b8bb-85cd0197b833\") " pod="openstack/nova-metadata-0" Feb 27 10:52:11 crc kubenswrapper[4728]: I0227 10:52:11.089191 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gn5c\" (UniqueName: 
\"kubernetes.io/projected/65e3743e-125d-4cf1-b8bb-85cd0197b833-kube-api-access-4gn5c\") pod \"nova-metadata-0\" (UID: \"65e3743e-125d-4cf1-b8bb-85cd0197b833\") " pod="openstack/nova-metadata-0" Feb 27 10:52:11 crc kubenswrapper[4728]: I0227 10:52:11.244646 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 10:52:11 crc kubenswrapper[4728]: I0227 10:52:11.825862 4728 generic.go:334] "Generic (PLEG): container finished" podID="195e680c-d89c-4b39-ae68-145934b70fa2" containerID="1166b274401bd1f7b903a229bfec9c38ef0399907dc26a92c7654a11f4cb85ad" exitCode=0 Feb 27 10:52:11 crc kubenswrapper[4728]: I0227 10:52:11.825887 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"195e680c-d89c-4b39-ae68-145934b70fa2","Type":"ContainerDied","Data":"1166b274401bd1f7b903a229bfec9c38ef0399907dc26a92c7654a11f4cb85ad"} Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.084859 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 10:52:12 crc kubenswrapper[4728]: W0227 10:52:12.111204 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65e3743e_125d_4cf1_b8bb_85cd0197b833.slice/crio-6a9a997a259d2e20a39f6aa76d2ad562317bb07e530f9b65ba49ce66b833c3ec WatchSource:0}: Error finding container 6a9a997a259d2e20a39f6aa76d2ad562317bb07e530f9b65ba49ce66b833c3ec: Status 404 returned error can't find the container with id 6a9a997a259d2e20a39f6aa76d2ad562317bb07e530f9b65ba49ce66b833c3ec Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.286049 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.415776 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195e680c-d89c-4b39-ae68-145934b70fa2-logs\") pod \"195e680c-d89c-4b39-ae68-145934b70fa2\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.415912 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-combined-ca-bundle\") pod \"195e680c-d89c-4b39-ae68-145934b70fa2\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.415956 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24f4l\" (UniqueName: \"kubernetes.io/projected/195e680c-d89c-4b39-ae68-145934b70fa2-kube-api-access-24f4l\") pod \"195e680c-d89c-4b39-ae68-145934b70fa2\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.416482 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-public-tls-certs\") pod \"195e680c-d89c-4b39-ae68-145934b70fa2\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.416572 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-internal-tls-certs\") pod \"195e680c-d89c-4b39-ae68-145934b70fa2\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.416667 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-config-data\") pod \"195e680c-d89c-4b39-ae68-145934b70fa2\" (UID: \"195e680c-d89c-4b39-ae68-145934b70fa2\") " Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.417021 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/195e680c-d89c-4b39-ae68-145934b70fa2-logs" (OuterVolumeSpecName: "logs") pod "195e680c-d89c-4b39-ae68-145934b70fa2" (UID: "195e680c-d89c-4b39-ae68-145934b70fa2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.418206 4728 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195e680c-d89c-4b39-ae68-145934b70fa2-logs\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.421819 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/195e680c-d89c-4b39-ae68-145934b70fa2-kube-api-access-24f4l" (OuterVolumeSpecName: "kube-api-access-24f4l") pod "195e680c-d89c-4b39-ae68-145934b70fa2" (UID: "195e680c-d89c-4b39-ae68-145934b70fa2"). InnerVolumeSpecName "kube-api-access-24f4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.459569 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-config-data" (OuterVolumeSpecName: "config-data") pod "195e680c-d89c-4b39-ae68-145934b70fa2" (UID: "195e680c-d89c-4b39-ae68-145934b70fa2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.460521 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "195e680c-d89c-4b39-ae68-145934b70fa2" (UID: "195e680c-d89c-4b39-ae68-145934b70fa2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.489238 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "195e680c-d89c-4b39-ae68-145934b70fa2" (UID: "195e680c-d89c-4b39-ae68-145934b70fa2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.499724 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "195e680c-d89c-4b39-ae68-145934b70fa2" (UID: "195e680c-d89c-4b39-ae68-145934b70fa2"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.521424 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.521577 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24f4l\" (UniqueName: \"kubernetes.io/projected/195e680c-d89c-4b39-ae68-145934b70fa2-kube-api-access-24f4l\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.521588 4728 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.521598 4728 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.521607 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195e680c-d89c-4b39-ae68-145934b70fa2-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.742891 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec89ac4c-d100-4004-bb62-0f5e6a344efd" path="/var/lib/kubelet/pods/ec89ac4c-d100-4004-bb62-0f5e6a344efd/volumes" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.809743 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6xxv8"] Feb 27 10:52:12 crc kubenswrapper[4728]: E0227 10:52:12.810381 4728 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="195e680c-d89c-4b39-ae68-145934b70fa2" containerName="nova-api-log" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.810404 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="195e680c-d89c-4b39-ae68-145934b70fa2" containerName="nova-api-log" Feb 27 10:52:12 crc kubenswrapper[4728]: E0227 10:52:12.810428 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195e680c-d89c-4b39-ae68-145934b70fa2" containerName="nova-api-api" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.810437 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="195e680c-d89c-4b39-ae68-145934b70fa2" containerName="nova-api-api" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.810779 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="195e680c-d89c-4b39-ae68-145934b70fa2" containerName="nova-api-api" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.810821 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="195e680c-d89c-4b39-ae68-145934b70fa2" containerName="nova-api-log" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.814912 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xxv8" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.826192 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xxv8"] Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.854205 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65e3743e-125d-4cf1-b8bb-85cd0197b833","Type":"ContainerStarted","Data":"0ab2277997012f4b5c3a64c796a876e5280f28539d6c299e9fd0b35f37f8864b"} Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.854256 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65e3743e-125d-4cf1-b8bb-85cd0197b833","Type":"ContainerStarted","Data":"aa8b1591e50eb5d244b493fe5f125a507d833837fa77cbc0804dfe6429cb742f"} Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.854271 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65e3743e-125d-4cf1-b8bb-85cd0197b833","Type":"ContainerStarted","Data":"6a9a997a259d2e20a39f6aa76d2ad562317bb07e530f9b65ba49ce66b833c3ec"} Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.856276 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"195e680c-d89c-4b39-ae68-145934b70fa2","Type":"ContainerDied","Data":"90ee8606f1cbb298b6a49cadeaf5ecf39a12959fab55e7b9c5ae0a323a51e8d7"} Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.856388 4728 scope.go:117] "RemoveContainer" containerID="1166b274401bd1f7b903a229bfec9c38ef0399907dc26a92c7654a11f4cb85ad" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.856585 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.884813 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.884789153 podStartE2EDuration="2.884789153s" podCreationTimestamp="2026-02-27 10:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:52:12.878755389 +0000 UTC m=+1552.841121485" watchObservedRunningTime="2026-02-27 10:52:12.884789153 +0000 UTC m=+1552.847155259" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.899037 4728 scope.go:117] "RemoveContainer" containerID="8e6000b65abfaea8381dd3976c155f1a601e5c00805ad4d2bd622bfacd6d0749" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.923291 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.930112 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d87df44b-fc24-4c81-8c22-94a12665da84-utilities\") pod \"certified-operators-6xxv8\" (UID: \"d87df44b-fc24-4c81-8c22-94a12665da84\") " pod="openshift-marketplace/certified-operators-6xxv8" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.930213 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnj8s\" (UniqueName: \"kubernetes.io/projected/d87df44b-fc24-4c81-8c22-94a12665da84-kube-api-access-fnj8s\") pod \"certified-operators-6xxv8\" (UID: \"d87df44b-fc24-4c81-8c22-94a12665da84\") " pod="openshift-marketplace/certified-operators-6xxv8" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.930272 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d87df44b-fc24-4c81-8c22-94a12665da84-catalog-content\") pod \"certified-operators-6xxv8\" (UID: \"d87df44b-fc24-4c81-8c22-94a12665da84\") " pod="openshift-marketplace/certified-operators-6xxv8" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.936754 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.957992 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.961027 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.974760 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.974859 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.975112 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 10:52:12 crc kubenswrapper[4728]: I0227 10:52:12.975273 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.031904 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d87df44b-fc24-4c81-8c22-94a12665da84-utilities\") pod \"certified-operators-6xxv8\" (UID: \"d87df44b-fc24-4c81-8c22-94a12665da84\") " pod="openshift-marketplace/certified-operators-6xxv8" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.032025 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnj8s\" (UniqueName: \"kubernetes.io/projected/d87df44b-fc24-4c81-8c22-94a12665da84-kube-api-access-fnj8s\") pod 
\"certified-operators-6xxv8\" (UID: \"d87df44b-fc24-4c81-8c22-94a12665da84\") " pod="openshift-marketplace/certified-operators-6xxv8" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.032108 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d87df44b-fc24-4c81-8c22-94a12665da84-catalog-content\") pod \"certified-operators-6xxv8\" (UID: \"d87df44b-fc24-4c81-8c22-94a12665da84\") " pod="openshift-marketplace/certified-operators-6xxv8" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.035565 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d87df44b-fc24-4c81-8c22-94a12665da84-utilities\") pod \"certified-operators-6xxv8\" (UID: \"d87df44b-fc24-4c81-8c22-94a12665da84\") " pod="openshift-marketplace/certified-operators-6xxv8" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.050328 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d87df44b-fc24-4c81-8c22-94a12665da84-catalog-content\") pod \"certified-operators-6xxv8\" (UID: \"d87df44b-fc24-4c81-8c22-94a12665da84\") " pod="openshift-marketplace/certified-operators-6xxv8" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.061298 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnj8s\" (UniqueName: \"kubernetes.io/projected/d87df44b-fc24-4c81-8c22-94a12665da84-kube-api-access-fnj8s\") pod \"certified-operators-6xxv8\" (UID: \"d87df44b-fc24-4c81-8c22-94a12665da84\") " pod="openshift-marketplace/certified-operators-6xxv8" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.134048 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4ac27f-bce8-4cac-b444-4bc0921f975d-internal-tls-certs\") pod \"nova-api-0\" 
(UID: \"cb4ac27f-bce8-4cac-b444-4bc0921f975d\") " pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.134113 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb4ac27f-bce8-4cac-b444-4bc0921f975d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cb4ac27f-bce8-4cac-b444-4bc0921f975d\") " pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.134143 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krlbg\" (UniqueName: \"kubernetes.io/projected/cb4ac27f-bce8-4cac-b444-4bc0921f975d-kube-api-access-krlbg\") pod \"nova-api-0\" (UID: \"cb4ac27f-bce8-4cac-b444-4bc0921f975d\") " pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.134344 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4ac27f-bce8-4cac-b444-4bc0921f975d-public-tls-certs\") pod \"nova-api-0\" (UID: \"cb4ac27f-bce8-4cac-b444-4bc0921f975d\") " pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.134394 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb4ac27f-bce8-4cac-b444-4bc0921f975d-logs\") pod \"nova-api-0\" (UID: \"cb4ac27f-bce8-4cac-b444-4bc0921f975d\") " pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.134692 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb4ac27f-bce8-4cac-b444-4bc0921f975d-config-data\") pod \"nova-api-0\" (UID: \"cb4ac27f-bce8-4cac-b444-4bc0921f975d\") " pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.150375 4728 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xxv8" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.237907 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb4ac27f-bce8-4cac-b444-4bc0921f975d-config-data\") pod \"nova-api-0\" (UID: \"cb4ac27f-bce8-4cac-b444-4bc0921f975d\") " pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.238348 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4ac27f-bce8-4cac-b444-4bc0921f975d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cb4ac27f-bce8-4cac-b444-4bc0921f975d\") " pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.238398 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb4ac27f-bce8-4cac-b444-4bc0921f975d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cb4ac27f-bce8-4cac-b444-4bc0921f975d\") " pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.238437 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krlbg\" (UniqueName: \"kubernetes.io/projected/cb4ac27f-bce8-4cac-b444-4bc0921f975d-kube-api-access-krlbg\") pod \"nova-api-0\" (UID: \"cb4ac27f-bce8-4cac-b444-4bc0921f975d\") " pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.238582 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4ac27f-bce8-4cac-b444-4bc0921f975d-public-tls-certs\") pod \"nova-api-0\" (UID: \"cb4ac27f-bce8-4cac-b444-4bc0921f975d\") " pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.238611 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb4ac27f-bce8-4cac-b444-4bc0921f975d-logs\") pod \"nova-api-0\" (UID: \"cb4ac27f-bce8-4cac-b444-4bc0921f975d\") " pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.239305 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb4ac27f-bce8-4cac-b444-4bc0921f975d-logs\") pod \"nova-api-0\" (UID: \"cb4ac27f-bce8-4cac-b444-4bc0921f975d\") " pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.244132 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb4ac27f-bce8-4cac-b444-4bc0921f975d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cb4ac27f-bce8-4cac-b444-4bc0921f975d\") " pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.247239 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb4ac27f-bce8-4cac-b444-4bc0921f975d-config-data\") pod \"nova-api-0\" (UID: \"cb4ac27f-bce8-4cac-b444-4bc0921f975d\") " pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.249136 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4ac27f-bce8-4cac-b444-4bc0921f975d-public-tls-certs\") pod \"nova-api-0\" (UID: \"cb4ac27f-bce8-4cac-b444-4bc0921f975d\") " pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.249867 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4ac27f-bce8-4cac-b444-4bc0921f975d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cb4ac27f-bce8-4cac-b444-4bc0921f975d\") " pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 
10:52:13.257265 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krlbg\" (UniqueName: \"kubernetes.io/projected/cb4ac27f-bce8-4cac-b444-4bc0921f975d-kube-api-access-krlbg\") pod \"nova-api-0\" (UID: \"cb4ac27f-bce8-4cac-b444-4bc0921f975d\") " pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.281296 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.578438 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 27 10:52:13 crc kubenswrapper[4728]: W0227 10:52:13.706465 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd87df44b_fc24_4c81_8c22_94a12665da84.slice/crio-7c28ae35291bc6cbda5119f012b4228917d07b488730dc578ced6f07e2ff8859 WatchSource:0}: Error finding container 7c28ae35291bc6cbda5119f012b4228917d07b488730dc578ced6f07e2ff8859: Status 404 returned error can't find the container with id 7c28ae35291bc6cbda5119f012b4228917d07b488730dc578ced6f07e2ff8859 Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.718083 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xxv8"] Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.869344 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xxv8" event={"ID":"d87df44b-fc24-4c81-8c22-94a12665da84","Type":"ContainerStarted","Data":"7c28ae35291bc6cbda5119f012b4228917d07b488730dc578ced6f07e2ff8859"} Feb 27 10:52:13 crc kubenswrapper[4728]: W0227 10:52:13.874221 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb4ac27f_bce8_4cac_b444_4bc0921f975d.slice/crio-914bcbb6c8f38356ff788b26fb0948822236ade0ae7e3dadf0e19b60b21da9a1 
WatchSource:0}: Error finding container 914bcbb6c8f38356ff788b26fb0948822236ade0ae7e3dadf0e19b60b21da9a1: Status 404 returned error can't find the container with id 914bcbb6c8f38356ff788b26fb0948822236ade0ae7e3dadf0e19b60b21da9a1 Feb 27 10:52:13 crc kubenswrapper[4728]: I0227 10:52:13.882675 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 10:52:14 crc kubenswrapper[4728]: I0227 10:52:14.743898 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="195e680c-d89c-4b39-ae68-145934b70fa2" path="/var/lib/kubelet/pods/195e680c-d89c-4b39-ae68-145934b70fa2/volumes" Feb 27 10:52:14 crc kubenswrapper[4728]: I0227 10:52:14.885292 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb4ac27f-bce8-4cac-b444-4bc0921f975d","Type":"ContainerStarted","Data":"6bfb3fffa9ec5b69c740a2b6378d075aba4f35faa6620fd52bdbd055b810bbd6"} Feb 27 10:52:14 crc kubenswrapper[4728]: I0227 10:52:14.885339 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb4ac27f-bce8-4cac-b444-4bc0921f975d","Type":"ContainerStarted","Data":"034fef1daa17f853981843c29594fb3ef1ddbcdeb706ee2303f47ad92be6c48d"} Feb 27 10:52:14 crc kubenswrapper[4728]: I0227 10:52:14.885353 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb4ac27f-bce8-4cac-b444-4bc0921f975d","Type":"ContainerStarted","Data":"914bcbb6c8f38356ff788b26fb0948822236ade0ae7e3dadf0e19b60b21da9a1"} Feb 27 10:52:14 crc kubenswrapper[4728]: I0227 10:52:14.888173 4728 generic.go:334] "Generic (PLEG): container finished" podID="d87df44b-fc24-4c81-8c22-94a12665da84" containerID="9256ee19a53ff1048b4b9fa9b802f009e81ac959e7a766aa7fd2e27b13b02c2f" exitCode=0 Feb 27 10:52:14 crc kubenswrapper[4728]: I0227 10:52:14.888233 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xxv8" 
event={"ID":"d87df44b-fc24-4c81-8c22-94a12665da84","Type":"ContainerDied","Data":"9256ee19a53ff1048b4b9fa9b802f009e81ac959e7a766aa7fd2e27b13b02c2f"} Feb 27 10:52:14 crc kubenswrapper[4728]: I0227 10:52:14.917679 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.917659265 podStartE2EDuration="2.917659265s" podCreationTimestamp="2026-02-27 10:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:52:14.910027208 +0000 UTC m=+1554.872393314" watchObservedRunningTime="2026-02-27 10:52:14.917659265 +0000 UTC m=+1554.880025381" Feb 27 10:52:16 crc kubenswrapper[4728]: I0227 10:52:16.244723 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 10:52:16 crc kubenswrapper[4728]: I0227 10:52:16.245378 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 10:52:18 crc kubenswrapper[4728]: I0227 10:52:18.575705 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 27 10:52:18 crc kubenswrapper[4728]: I0227 10:52:18.610820 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 27 10:52:19 crc kubenswrapper[4728]: I0227 10:52:19.162224 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 27 10:52:20 crc kubenswrapper[4728]: I0227 10:52:20.445140 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h4gl8" podUID="efa9e238-79b0-4757-acab-53537b5ae93a" containerName="registry-server" probeResult="failure" output=< Feb 27 10:52:20 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 10:52:20 crc kubenswrapper[4728]: > Feb 27 10:52:21 crc kubenswrapper[4728]: 
I0227 10:52:21.244751 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 10:52:21 crc kubenswrapper[4728]: I0227 10:52:21.245350 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.120826 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xxv8" event={"ID":"d87df44b-fc24-4c81-8c22-94a12665da84","Type":"ContainerStarted","Data":"f5a837c4028ceb761d3df344c8933ebf9bcd88f21fdad4ac60d885a7d072e931"} Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.127757 4728 generic.go:334] "Generic (PLEG): container finished" podID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerID="f9f2d8d912390de684dbeca893fc6e00f5cff27566b226162549305a7fe73564" exitCode=137 Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.127796 4728 generic.go:334] "Generic (PLEG): container finished" podID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerID="8766429dec4d5a212661654f4bc0512821fe17b80d97bfe039c3f4c0d0273195" exitCode=137 Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.128705 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c43d8f20-c2f5-4269-b8fb-aec91f9c9150","Type":"ContainerDied","Data":"f9f2d8d912390de684dbeca893fc6e00f5cff27566b226162549305a7fe73564"} Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.128785 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c43d8f20-c2f5-4269-b8fb-aec91f9c9150","Type":"ContainerDied","Data":"8766429dec4d5a212661654f4bc0512821fe17b80d97bfe039c3f4c0d0273195"} Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.128799 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"c43d8f20-c2f5-4269-b8fb-aec91f9c9150","Type":"ContainerDied","Data":"872d71a017d84b6caf3530613786e47bfd46227c97cce702f8be4e463f454c20"} Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.128811 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="872d71a017d84b6caf3530613786e47bfd46227c97cce702f8be4e463f454c20" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.257762 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="65e3743e-125d-4cf1-b8bb-85cd0197b833" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.18:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.258112 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="65e3743e-125d-4cf1-b8bb-85cd0197b833" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.18:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.292450 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.354673 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-scripts\") pod \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\" (UID: \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\") " Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.354850 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-combined-ca-bundle\") pod \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\" (UID: \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\") " Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.354925 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt2sh\" (UniqueName: \"kubernetes.io/projected/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-kube-api-access-pt2sh\") pod \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\" (UID: \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\") " Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.354955 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-config-data\") pod \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\" (UID: \"c43d8f20-c2f5-4269-b8fb-aec91f9c9150\") " Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.368991 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-kube-api-access-pt2sh" (OuterVolumeSpecName: "kube-api-access-pt2sh") pod "c43d8f20-c2f5-4269-b8fb-aec91f9c9150" (UID: "c43d8f20-c2f5-4269-b8fb-aec91f9c9150"). InnerVolumeSpecName "kube-api-access-pt2sh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.380735 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-scripts" (OuterVolumeSpecName: "scripts") pod "c43d8f20-c2f5-4269-b8fb-aec91f9c9150" (UID: "c43d8f20-c2f5-4269-b8fb-aec91f9c9150"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.460641 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.460676 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt2sh\" (UniqueName: \"kubernetes.io/projected/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-kube-api-access-pt2sh\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.482594 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fn4cs"] Feb 27 10:52:22 crc kubenswrapper[4728]: E0227 10:52:22.483476 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerName="aodh-api" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.483497 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerName="aodh-api" Feb 27 10:52:22 crc kubenswrapper[4728]: E0227 10:52:22.483763 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerName="aodh-notifier" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.483773 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerName="aodh-notifier" Feb 27 10:52:22 crc kubenswrapper[4728]: E0227 
10:52:22.483823 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerName="aodh-evaluator" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.483834 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerName="aodh-evaluator" Feb 27 10:52:22 crc kubenswrapper[4728]: E0227 10:52:22.483890 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerName="aodh-listener" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.483897 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerName="aodh-listener" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.484124 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerName="aodh-notifier" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.484147 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerName="aodh-evaluator" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.484159 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerName="aodh-listener" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.484182 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" containerName="aodh-api" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.487668 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fn4cs" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.512147 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fn4cs"] Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.563645 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2-utilities\") pod \"redhat-marketplace-fn4cs\" (UID: \"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2\") " pod="openshift-marketplace/redhat-marketplace-fn4cs" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.563712 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6vjl\" (UniqueName: \"kubernetes.io/projected/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2-kube-api-access-g6vjl\") pod \"redhat-marketplace-fn4cs\" (UID: \"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2\") " pod="openshift-marketplace/redhat-marketplace-fn4cs" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.563747 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2-catalog-content\") pod \"redhat-marketplace-fn4cs\" (UID: \"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2\") " pod="openshift-marketplace/redhat-marketplace-fn4cs" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.569147 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c43d8f20-c2f5-4269-b8fb-aec91f9c9150" (UID: "c43d8f20-c2f5-4269-b8fb-aec91f9c9150"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.576916 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-config-data" (OuterVolumeSpecName: "config-data") pod "c43d8f20-c2f5-4269-b8fb-aec91f9c9150" (UID: "c43d8f20-c2f5-4269-b8fb-aec91f9c9150"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.665927 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2-utilities\") pod \"redhat-marketplace-fn4cs\" (UID: \"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2\") " pod="openshift-marketplace/redhat-marketplace-fn4cs" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.666482 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2-utilities\") pod \"redhat-marketplace-fn4cs\" (UID: \"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2\") " pod="openshift-marketplace/redhat-marketplace-fn4cs" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.666632 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6vjl\" (UniqueName: \"kubernetes.io/projected/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2-kube-api-access-g6vjl\") pod \"redhat-marketplace-fn4cs\" (UID: \"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2\") " pod="openshift-marketplace/redhat-marketplace-fn4cs" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.666763 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2-catalog-content\") pod \"redhat-marketplace-fn4cs\" (UID: \"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2\") " 
pod="openshift-marketplace/redhat-marketplace-fn4cs" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.666975 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.667078 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c43d8f20-c2f5-4269-b8fb-aec91f9c9150-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.667146 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2-catalog-content\") pod \"redhat-marketplace-fn4cs\" (UID: \"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2\") " pod="openshift-marketplace/redhat-marketplace-fn4cs" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.682875 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6vjl\" (UniqueName: \"kubernetes.io/projected/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2-kube-api-access-g6vjl\") pod \"redhat-marketplace-fn4cs\" (UID: \"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2\") " pod="openshift-marketplace/redhat-marketplace-fn4cs" Feb 27 10:52:22 crc kubenswrapper[4728]: I0227 10:52:22.927002 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fn4cs" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.146198 4728 generic.go:334] "Generic (PLEG): container finished" podID="d87df44b-fc24-4c81-8c22-94a12665da84" containerID="f5a837c4028ceb761d3df344c8933ebf9bcd88f21fdad4ac60d885a7d072e931" exitCode=0 Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.146549 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.147349 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xxv8" event={"ID":"d87df44b-fc24-4c81-8c22-94a12665da84","Type":"ContainerDied","Data":"f5a837c4028ceb761d3df344c8933ebf9bcd88f21fdad4ac60d885a7d072e931"} Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.250052 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.280832 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.281697 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.281757 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.292799 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.296692 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.298984 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.299390 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.299896 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.300667 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.300947 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-4dctm" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.324529 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.388901 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tvxw\" (UniqueName: \"kubernetes.io/projected/a056167f-457f-4547-ab3e-cbe2433d3cfc-kube-api-access-4tvxw\") pod \"aodh-0\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.388993 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-public-tls-certs\") pod \"aodh-0\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.389027 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-config-data\") pod \"aodh-0\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.389064 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.389175 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-scripts\") pod \"aodh-0\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.389271 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-internal-tls-certs\") pod \"aodh-0\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.476405 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fn4cs"] Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.491466 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.491658 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-scripts\") pod \"aodh-0\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.491803 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-internal-tls-certs\") pod \"aodh-0\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.491855 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tvxw\" (UniqueName: \"kubernetes.io/projected/a056167f-457f-4547-ab3e-cbe2433d3cfc-kube-api-access-4tvxw\") pod \"aodh-0\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.491917 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-public-tls-certs\") pod \"aodh-0\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.491949 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-config-data\") pod \"aodh-0\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.501687 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-scripts\") pod \"aodh-0\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.504030 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-internal-tls-certs\") pod \"aodh-0\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.505392 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-config-data\") pod \"aodh-0\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.505768 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-combined-ca-bundle\") pod \"aodh-0\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.515138 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tvxw\" (UniqueName: \"kubernetes.io/projected/a056167f-457f-4547-ab3e-cbe2433d3cfc-kube-api-access-4tvxw\") pod \"aodh-0\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.522480 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-public-tls-certs\") pod \"aodh-0\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " pod="openstack/aodh-0" Feb 27 10:52:23 crc kubenswrapper[4728]: I0227 10:52:23.677373 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 27 10:52:24 crc kubenswrapper[4728]: I0227 10:52:24.172048 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fn4cs" event={"ID":"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2","Type":"ContainerStarted","Data":"7d0a526bea657ac3b7f0454548f59bac988c33f070c6af16b0bead05a183ecf3"} Feb 27 10:52:24 crc kubenswrapper[4728]: I0227 10:52:24.172373 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fn4cs" event={"ID":"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2","Type":"ContainerStarted","Data":"7b6367e016331b46edd4c1c7b40213ef835c171f53d35d597281c688e29cf1c2"} Feb 27 10:52:24 crc kubenswrapper[4728]: I0227 10:52:24.213528 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 27 10:52:24 crc kubenswrapper[4728]: I0227 10:52:24.303762 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cb4ac27f-bce8-4cac-b444-4bc0921f975d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.20:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 10:52:24 crc kubenswrapper[4728]: I0227 10:52:24.304034 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cb4ac27f-bce8-4cac-b444-4bc0921f975d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.20:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 10:52:24 crc kubenswrapper[4728]: I0227 10:52:24.748724 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c43d8f20-c2f5-4269-b8fb-aec91f9c9150" path="/var/lib/kubelet/pods/c43d8f20-c2f5-4269-b8fb-aec91f9c9150/volumes" Feb 27 10:52:25 crc kubenswrapper[4728]: I0227 10:52:25.188763 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xxv8" 
event={"ID":"d87df44b-fc24-4c81-8c22-94a12665da84","Type":"ContainerStarted","Data":"98411c6a8e68e4347c53bd1ac875f4e5dc092db779cf9e6232d4ad91ca4eb319"} Feb 27 10:52:25 crc kubenswrapper[4728]: I0227 10:52:25.194399 4728 generic.go:334] "Generic (PLEG): container finished" podID="9c6a8916-fd56-45ba-837e-78eb7fd7b7f2" containerID="7d0a526bea657ac3b7f0454548f59bac988c33f070c6af16b0bead05a183ecf3" exitCode=0 Feb 27 10:52:25 crc kubenswrapper[4728]: I0227 10:52:25.194531 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fn4cs" event={"ID":"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2","Type":"ContainerDied","Data":"7d0a526bea657ac3b7f0454548f59bac988c33f070c6af16b0bead05a183ecf3"} Feb 27 10:52:25 crc kubenswrapper[4728]: I0227 10:52:25.197792 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a056167f-457f-4547-ab3e-cbe2433d3cfc","Type":"ContainerStarted","Data":"f1abe65966a47425466ace996490bcf4d94d3349b1740e7639e183104240803e"} Feb 27 10:52:25 crc kubenswrapper[4728]: I0227 10:52:25.197840 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a056167f-457f-4547-ab3e-cbe2433d3cfc","Type":"ContainerStarted","Data":"ccf35c61e9d68b3146200ae01aa4c1d041ea7f202a2978c87ec24400a03a78eb"} Feb 27 10:52:25 crc kubenswrapper[4728]: I0227 10:52:25.215595 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6xxv8" podStartSLOduration=3.862352729 podStartE2EDuration="13.215571881s" podCreationTimestamp="2026-02-27 10:52:12 +0000 UTC" firstStartedPulling="2026-02-27 10:52:14.89207418 +0000 UTC m=+1554.854440286" lastFinishedPulling="2026-02-27 10:52:24.245293332 +0000 UTC m=+1564.207659438" observedRunningTime="2026-02-27 10:52:25.211267873 +0000 UTC m=+1565.173633989" watchObservedRunningTime="2026-02-27 10:52:25.215571881 +0000 UTC m=+1565.177937987" Feb 27 10:52:26 crc kubenswrapper[4728]: I0227 
10:52:26.213047 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fn4cs" event={"ID":"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2","Type":"ContainerStarted","Data":"4314e9dc2109a3698b2863ddc6375131f7055ac70f7d9683a7f306b7d70d1187"} Feb 27 10:52:26 crc kubenswrapper[4728]: I0227 10:52:26.221176 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a056167f-457f-4547-ab3e-cbe2433d3cfc","Type":"ContainerStarted","Data":"9752f11f948dc0098e7f9171797e12dd9bdabf580206df8f6012025a1ce7634e"} Feb 27 10:52:27 crc kubenswrapper[4728]: I0227 10:52:27.236652 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a056167f-457f-4547-ab3e-cbe2433d3cfc","Type":"ContainerStarted","Data":"c050a67b2f5f62b9a818d67124386db2c589134db2850e4f241133daee7eed5b"} Feb 27 10:52:27 crc kubenswrapper[4728]: I0227 10:52:27.238996 4728 generic.go:334] "Generic (PLEG): container finished" podID="9c6a8916-fd56-45ba-837e-78eb7fd7b7f2" containerID="4314e9dc2109a3698b2863ddc6375131f7055ac70f7d9683a7f306b7d70d1187" exitCode=0 Feb 27 10:52:27 crc kubenswrapper[4728]: I0227 10:52:27.239030 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fn4cs" event={"ID":"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2","Type":"ContainerDied","Data":"4314e9dc2109a3698b2863ddc6375131f7055ac70f7d9683a7f306b7d70d1187"} Feb 27 10:52:27 crc kubenswrapper[4728]: I0227 10:52:27.569392 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fktbl"] Feb 27 10:52:27 crc kubenswrapper[4728]: I0227 10:52:27.571691 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fktbl" Feb 27 10:52:27 crc kubenswrapper[4728]: I0227 10:52:27.583636 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fktbl"] Feb 27 10:52:27 crc kubenswrapper[4728]: I0227 10:52:27.717907 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgckv\" (UniqueName: \"kubernetes.io/projected/24ef4f58-08f3-4576-9e84-83c0575600a3-kube-api-access-lgckv\") pod \"community-operators-fktbl\" (UID: \"24ef4f58-08f3-4576-9e84-83c0575600a3\") " pod="openshift-marketplace/community-operators-fktbl" Feb 27 10:52:27 crc kubenswrapper[4728]: I0227 10:52:27.718058 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ef4f58-08f3-4576-9e84-83c0575600a3-utilities\") pod \"community-operators-fktbl\" (UID: \"24ef4f58-08f3-4576-9e84-83c0575600a3\") " pod="openshift-marketplace/community-operators-fktbl" Feb 27 10:52:27 crc kubenswrapper[4728]: I0227 10:52:27.718084 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ef4f58-08f3-4576-9e84-83c0575600a3-catalog-content\") pod \"community-operators-fktbl\" (UID: \"24ef4f58-08f3-4576-9e84-83c0575600a3\") " pod="openshift-marketplace/community-operators-fktbl" Feb 27 10:52:27 crc kubenswrapper[4728]: I0227 10:52:27.820457 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ef4f58-08f3-4576-9e84-83c0575600a3-utilities\") pod \"community-operators-fktbl\" (UID: \"24ef4f58-08f3-4576-9e84-83c0575600a3\") " pod="openshift-marketplace/community-operators-fktbl" Feb 27 10:52:27 crc kubenswrapper[4728]: I0227 10:52:27.820532 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ef4f58-08f3-4576-9e84-83c0575600a3-catalog-content\") pod \"community-operators-fktbl\" (UID: \"24ef4f58-08f3-4576-9e84-83c0575600a3\") " pod="openshift-marketplace/community-operators-fktbl" Feb 27 10:52:27 crc kubenswrapper[4728]: I0227 10:52:27.820694 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgckv\" (UniqueName: \"kubernetes.io/projected/24ef4f58-08f3-4576-9e84-83c0575600a3-kube-api-access-lgckv\") pod \"community-operators-fktbl\" (UID: \"24ef4f58-08f3-4576-9e84-83c0575600a3\") " pod="openshift-marketplace/community-operators-fktbl" Feb 27 10:52:27 crc kubenswrapper[4728]: I0227 10:52:27.821874 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ef4f58-08f3-4576-9e84-83c0575600a3-utilities\") pod \"community-operators-fktbl\" (UID: \"24ef4f58-08f3-4576-9e84-83c0575600a3\") " pod="openshift-marketplace/community-operators-fktbl" Feb 27 10:52:27 crc kubenswrapper[4728]: I0227 10:52:27.821928 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ef4f58-08f3-4576-9e84-83c0575600a3-catalog-content\") pod \"community-operators-fktbl\" (UID: \"24ef4f58-08f3-4576-9e84-83c0575600a3\") " pod="openshift-marketplace/community-operators-fktbl" Feb 27 10:52:27 crc kubenswrapper[4728]: I0227 10:52:27.868203 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgckv\" (UniqueName: \"kubernetes.io/projected/24ef4f58-08f3-4576-9e84-83c0575600a3-kube-api-access-lgckv\") pod \"community-operators-fktbl\" (UID: \"24ef4f58-08f3-4576-9e84-83c0575600a3\") " pod="openshift-marketplace/community-operators-fktbl" Feb 27 10:52:27 crc kubenswrapper[4728]: I0227 10:52:27.937563 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fktbl" Feb 27 10:52:28 crc kubenswrapper[4728]: I0227 10:52:28.322485 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a056167f-457f-4547-ab3e-cbe2433d3cfc","Type":"ContainerStarted","Data":"bfdc65d6422751035eac485694f2efb82adc4ce16dfd42a905a39c32b4823bbf"} Feb 27 10:52:28 crc kubenswrapper[4728]: I0227 10:52:28.332248 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fn4cs" event={"ID":"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2","Type":"ContainerStarted","Data":"7909ca97f45415011ac59b761937371303c6829059f1d23ea11e9524cd9fb36a"} Feb 27 10:52:28 crc kubenswrapper[4728]: I0227 10:52:28.349478 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.574334716 podStartE2EDuration="5.349459939s" podCreationTimestamp="2026-02-27 10:52:23 +0000 UTC" firstStartedPulling="2026-02-27 10:52:24.237073899 +0000 UTC m=+1564.199440005" lastFinishedPulling="2026-02-27 10:52:27.012199122 +0000 UTC m=+1566.974565228" observedRunningTime="2026-02-27 10:52:28.349078068 +0000 UTC m=+1568.311444174" watchObservedRunningTime="2026-02-27 10:52:28.349459939 +0000 UTC m=+1568.311826045" Feb 27 10:52:28 crc kubenswrapper[4728]: I0227 10:52:28.379802 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fn4cs" podStartSLOduration=3.7138191799999998 podStartE2EDuration="6.379780561s" podCreationTimestamp="2026-02-27 10:52:22 +0000 UTC" firstStartedPulling="2026-02-27 10:52:25.197705146 +0000 UTC m=+1565.160071252" lastFinishedPulling="2026-02-27 10:52:27.863666527 +0000 UTC m=+1567.826032633" observedRunningTime="2026-02-27 10:52:28.37052411 +0000 UTC m=+1568.332890217" watchObservedRunningTime="2026-02-27 10:52:28.379780561 +0000 UTC m=+1568.342146667" Feb 27 10:52:28 crc kubenswrapper[4728]: I0227 
10:52:28.593888 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fktbl"] Feb 27 10:52:29 crc kubenswrapper[4728]: I0227 10:52:29.394756 4728 generic.go:334] "Generic (PLEG): container finished" podID="24ef4f58-08f3-4576-9e84-83c0575600a3" containerID="b4a17788435898136838b62b6e261a0b47b31af4c9fa2253a3639cc8a2f55203" exitCode=0 Feb 27 10:52:29 crc kubenswrapper[4728]: I0227 10:52:29.398547 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fktbl" event={"ID":"24ef4f58-08f3-4576-9e84-83c0575600a3","Type":"ContainerDied","Data":"b4a17788435898136838b62b6e261a0b47b31af4c9fa2253a3639cc8a2f55203"} Feb 27 10:52:29 crc kubenswrapper[4728]: I0227 10:52:29.398580 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fktbl" event={"ID":"24ef4f58-08f3-4576-9e84-83c0575600a3","Type":"ContainerStarted","Data":"7e7f0f243897ab79e9975f4b8a099c33f571749c4c82bec4cbb380e19fce3fd5"} Feb 27 10:52:30 crc kubenswrapper[4728]: I0227 10:52:30.408886 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fktbl" event={"ID":"24ef4f58-08f3-4576-9e84-83c0575600a3","Type":"ContainerStarted","Data":"5bb2caa527a84e5e3a90245b5c0d8aa47dc0c1a4fc4f2b69dacfc36afd0260b4"} Feb 27 10:52:30 crc kubenswrapper[4728]: I0227 10:52:30.480959 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h4gl8" podUID="efa9e238-79b0-4757-acab-53537b5ae93a" containerName="registry-server" probeResult="failure" output=< Feb 27 10:52:30 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 10:52:30 crc kubenswrapper[4728]: > Feb 27 10:52:31 crc kubenswrapper[4728]: I0227 10:52:31.294976 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 10:52:31 crc kubenswrapper[4728]: I0227 10:52:31.297010 
4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 10:52:31 crc kubenswrapper[4728]: I0227 10:52:31.305660 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 10:52:31 crc kubenswrapper[4728]: I0227 10:52:31.427244 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 10:52:32 crc kubenswrapper[4728]: I0227 10:52:32.438932 4728 generic.go:334] "Generic (PLEG): container finished" podID="24ef4f58-08f3-4576-9e84-83c0575600a3" containerID="5bb2caa527a84e5e3a90245b5c0d8aa47dc0c1a4fc4f2b69dacfc36afd0260b4" exitCode=0 Feb 27 10:52:32 crc kubenswrapper[4728]: I0227 10:52:32.439101 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fktbl" event={"ID":"24ef4f58-08f3-4576-9e84-83c0575600a3","Type":"ContainerDied","Data":"5bb2caa527a84e5e3a90245b5c0d8aa47dc0c1a4fc4f2b69dacfc36afd0260b4"} Feb 27 10:52:32 crc kubenswrapper[4728]: I0227 10:52:32.927485 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fn4cs" Feb 27 10:52:32 crc kubenswrapper[4728]: I0227 10:52:32.927552 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fn4cs" Feb 27 10:52:33 crc kubenswrapper[4728]: I0227 10:52:33.150758 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6xxv8" Feb 27 10:52:33 crc kubenswrapper[4728]: I0227 10:52:33.151135 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6xxv8" Feb 27 10:52:33 crc kubenswrapper[4728]: I0227 10:52:33.186949 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 27 10:52:33 crc kubenswrapper[4728]: I0227 
10:52:33.291282 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 10:52:33 crc kubenswrapper[4728]: I0227 10:52:33.292389 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 10:52:33 crc kubenswrapper[4728]: I0227 10:52:33.296139 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 10:52:33 crc kubenswrapper[4728]: I0227 10:52:33.303106 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 10:52:33 crc kubenswrapper[4728]: I0227 10:52:33.451569 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fktbl" event={"ID":"24ef4f58-08f3-4576-9e84-83c0575600a3","Type":"ContainerStarted","Data":"6a60be2da75ed4d772840f030bca0061f9f590e49ec3f22b4bbe70bf113faf77"} Feb 27 10:52:33 crc kubenswrapper[4728]: I0227 10:52:33.452119 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 10:52:33 crc kubenswrapper[4728]: I0227 10:52:33.458171 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 10:52:33 crc kubenswrapper[4728]: I0227 10:52:33.500831 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fktbl" podStartSLOduration=3.064477576 podStartE2EDuration="6.500810631s" podCreationTimestamp="2026-02-27 10:52:27 +0000 UTC" firstStartedPulling="2026-02-27 10:52:29.399451571 +0000 UTC m=+1569.361817677" lastFinishedPulling="2026-02-27 10:52:32.835784636 +0000 UTC m=+1572.798150732" observedRunningTime="2026-02-27 10:52:33.481314712 +0000 UTC m=+1573.443680828" watchObservedRunningTime="2026-02-27 10:52:33.500810631 +0000 UTC m=+1573.463176737" Feb 27 10:52:33 crc kubenswrapper[4728]: I0227 10:52:33.981948 4728 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fn4cs" podUID="9c6a8916-fd56-45ba-837e-78eb7fd7b7f2" containerName="registry-server" probeResult="failure" output=< Feb 27 10:52:33 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 10:52:33 crc kubenswrapper[4728]: > Feb 27 10:52:34 crc kubenswrapper[4728]: I0227 10:52:34.194758 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6xxv8" podUID="d87df44b-fc24-4c81-8c22-94a12665da84" containerName="registry-server" probeResult="failure" output=< Feb 27 10:52:34 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 10:52:34 crc kubenswrapper[4728]: > Feb 27 10:52:35 crc kubenswrapper[4728]: I0227 10:52:35.921602 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:52:35 crc kubenswrapper[4728]: I0227 10:52:35.921874 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:52:35 crc kubenswrapper[4728]: I0227 10:52:35.921915 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:52:35 crc kubenswrapper[4728]: I0227 10:52:35.922698 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e31218d6f1087057441016d4a0b5eeb91a41486bd3e9c9784604100aaaedc60a"} 
pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 10:52:35 crc kubenswrapper[4728]: I0227 10:52:35.922751 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" containerID="cri-o://e31218d6f1087057441016d4a0b5eeb91a41486bd3e9c9784604100aaaedc60a" gracePeriod=600 Feb 27 10:52:36 crc kubenswrapper[4728]: I0227 10:52:36.484391 4728 generic.go:334] "Generic (PLEG): container finished" podID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerID="e31218d6f1087057441016d4a0b5eeb91a41486bd3e9c9784604100aaaedc60a" exitCode=0 Feb 27 10:52:36 crc kubenswrapper[4728]: I0227 10:52:36.484497 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerDied","Data":"e31218d6f1087057441016d4a0b5eeb91a41486bd3e9c9784604100aaaedc60a"} Feb 27 10:52:36 crc kubenswrapper[4728]: I0227 10:52:36.484776 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4"} Feb 27 10:52:36 crc kubenswrapper[4728]: I0227 10:52:36.484798 4728 scope.go:117] "RemoveContainer" containerID="38e4421806f8078d8e00d718689caad66ee119d857ee6a04b69a7a968f3e70aa" Feb 27 10:52:37 crc kubenswrapper[4728]: I0227 10:52:37.937756 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fktbl" Feb 27 10:52:37 crc kubenswrapper[4728]: I0227 10:52:37.938112 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-fktbl" Feb 27 10:52:38 crc kubenswrapper[4728]: I0227 10:52:38.005189 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fktbl" Feb 27 10:52:38 crc kubenswrapper[4728]: I0227 10:52:38.572957 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fktbl" Feb 27 10:52:38 crc kubenswrapper[4728]: I0227 10:52:38.646310 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fktbl"] Feb 27 10:52:39 crc kubenswrapper[4728]: I0227 10:52:39.074694 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 10:52:39 crc kubenswrapper[4728]: I0227 10:52:39.074917 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="ed5c2715-a8a7-4d10-ba69-32133e2b6e51" containerName="kube-state-metrics" containerID="cri-o://1503074435abecf09855a93619b6cf0dbeab23c896af55fe7e0a4d539da35b29" gracePeriod=30 Feb 27 10:52:39 crc kubenswrapper[4728]: I0227 10:52:39.213042 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 27 10:52:39 crc kubenswrapper[4728]: I0227 10:52:39.213545 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="60433146-3d7a-433d-a3c3-3152b7591e49" containerName="mysqld-exporter" containerID="cri-o://a340d4a1674ed3d59c7300b26bdc066d546b7c44c64c94ca95931667f740f667" gracePeriod=30 Feb 27 10:52:39 crc kubenswrapper[4728]: E0227 10:52:39.422481 4728 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded5c2715_a8a7_4d10_ba69_32133e2b6e51.slice/crio-conmon-1503074435abecf09855a93619b6cf0dbeab23c896af55fe7e0a4d539da35b29.scope\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60433146_3d7a_433d_a3c3_3152b7591e49.slice/crio-a340d4a1674ed3d59c7300b26bdc066d546b7c44c64c94ca95931667f740f667.scope\": RecentStats: unable to find data in memory cache]" Feb 27 10:52:39 crc kubenswrapper[4728]: I0227 10:52:39.479697 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h4gl8" Feb 27 10:52:39 crc kubenswrapper[4728]: I0227 10:52:39.548127 4728 generic.go:334] "Generic (PLEG): container finished" podID="ed5c2715-a8a7-4d10-ba69-32133e2b6e51" containerID="1503074435abecf09855a93619b6cf0dbeab23c896af55fe7e0a4d539da35b29" exitCode=2 Feb 27 10:52:39 crc kubenswrapper[4728]: I0227 10:52:39.548413 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ed5c2715-a8a7-4d10-ba69-32133e2b6e51","Type":"ContainerDied","Data":"1503074435abecf09855a93619b6cf0dbeab23c896af55fe7e0a4d539da35b29"} Feb 27 10:52:39 crc kubenswrapper[4728]: I0227 10:52:39.549358 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h4gl8" Feb 27 10:52:39 crc kubenswrapper[4728]: I0227 10:52:39.560301 4728 generic.go:334] "Generic (PLEG): container finished" podID="60433146-3d7a-433d-a3c3-3152b7591e49" containerID="a340d4a1674ed3d59c7300b26bdc066d546b7c44c64c94ca95931667f740f667" exitCode=2 Feb 27 10:52:39 crc kubenswrapper[4728]: I0227 10:52:39.561621 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"60433146-3d7a-433d-a3c3-3152b7591e49","Type":"ContainerDied","Data":"a340d4a1674ed3d59c7300b26bdc066d546b7c44c64c94ca95931667f740f667"} Feb 27 10:52:39 crc kubenswrapper[4728]: I0227 10:52:39.965074 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 10:52:39 crc kubenswrapper[4728]: I0227 10:52:39.970844 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.066033 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptsbx\" (UniqueName: \"kubernetes.io/projected/60433146-3d7a-433d-a3c3-3152b7591e49-kube-api-access-ptsbx\") pod \"60433146-3d7a-433d-a3c3-3152b7591e49\" (UID: \"60433146-3d7a-433d-a3c3-3152b7591e49\") " Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.066188 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60433146-3d7a-433d-a3c3-3152b7591e49-config-data\") pod \"60433146-3d7a-433d-a3c3-3152b7591e49\" (UID: \"60433146-3d7a-433d-a3c3-3152b7591e49\") " Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.066263 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsc5n\" (UniqueName: \"kubernetes.io/projected/ed5c2715-a8a7-4d10-ba69-32133e2b6e51-kube-api-access-tsc5n\") pod \"ed5c2715-a8a7-4d10-ba69-32133e2b6e51\" (UID: \"ed5c2715-a8a7-4d10-ba69-32133e2b6e51\") " Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.066306 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60433146-3d7a-433d-a3c3-3152b7591e49-combined-ca-bundle\") pod \"60433146-3d7a-433d-a3c3-3152b7591e49\" (UID: \"60433146-3d7a-433d-a3c3-3152b7591e49\") " Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.072818 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60433146-3d7a-433d-a3c3-3152b7591e49-kube-api-access-ptsbx" (OuterVolumeSpecName: "kube-api-access-ptsbx") pod "60433146-3d7a-433d-a3c3-3152b7591e49" 
(UID: "60433146-3d7a-433d-a3c3-3152b7591e49"). InnerVolumeSpecName "kube-api-access-ptsbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.075984 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed5c2715-a8a7-4d10-ba69-32133e2b6e51-kube-api-access-tsc5n" (OuterVolumeSpecName: "kube-api-access-tsc5n") pod "ed5c2715-a8a7-4d10-ba69-32133e2b6e51" (UID: "ed5c2715-a8a7-4d10-ba69-32133e2b6e51"). InnerVolumeSpecName "kube-api-access-tsc5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.106400 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60433146-3d7a-433d-a3c3-3152b7591e49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60433146-3d7a-433d-a3c3-3152b7591e49" (UID: "60433146-3d7a-433d-a3c3-3152b7591e49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.166261 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60433146-3d7a-433d-a3c3-3152b7591e49-config-data" (OuterVolumeSpecName: "config-data") pod "60433146-3d7a-433d-a3c3-3152b7591e49" (UID: "60433146-3d7a-433d-a3c3-3152b7591e49"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.169529 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsc5n\" (UniqueName: \"kubernetes.io/projected/ed5c2715-a8a7-4d10-ba69-32133e2b6e51-kube-api-access-tsc5n\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.169572 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60433146-3d7a-433d-a3c3-3152b7591e49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.169586 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptsbx\" (UniqueName: \"kubernetes.io/projected/60433146-3d7a-433d-a3c3-3152b7591e49-kube-api-access-ptsbx\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.169600 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60433146-3d7a-433d-a3c3-3152b7591e49-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.576253 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.576242 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ed5c2715-a8a7-4d10-ba69-32133e2b6e51","Type":"ContainerDied","Data":"be2f88fb9a96191505ae662e54e0b8050a92780a5316ca642015bd9582aebf5e"} Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.576684 4728 scope.go:117] "RemoveContainer" containerID="1503074435abecf09855a93619b6cf0dbeab23c896af55fe7e0a4d539da35b29" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.577889 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.577953 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"60433146-3d7a-433d-a3c3-3152b7591e49","Type":"ContainerDied","Data":"506e4d97e0054efa81f65ca8df0b276044dedc3386fd4ecb65498ad2bd659bff"} Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.578154 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fktbl" podUID="24ef4f58-08f3-4576-9e84-83c0575600a3" containerName="registry-server" containerID="cri-o://6a60be2da75ed4d772840f030bca0061f9f590e49ec3f22b4bbe70bf113faf77" gracePeriod=2 Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.617389 4728 scope.go:117] "RemoveContainer" containerID="a340d4a1674ed3d59c7300b26bdc066d546b7c44c64c94ca95931667f740f667" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.645573 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.660577 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.670350 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.761491 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed5c2715-a8a7-4d10-ba69-32133e2b6e51" path="/var/lib/kubelet/pods/ed5c2715-a8a7-4d10-ba69-32133e2b6e51/volumes" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.762808 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.762850 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 10:52:40 crc kubenswrapper[4728]: E0227 10:52:40.763963 4728 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed5c2715-a8a7-4d10-ba69-32133e2b6e51" containerName="kube-state-metrics" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.763987 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed5c2715-a8a7-4d10-ba69-32133e2b6e51" containerName="kube-state-metrics" Feb 27 10:52:40 crc kubenswrapper[4728]: E0227 10:52:40.764029 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60433146-3d7a-433d-a3c3-3152b7591e49" containerName="mysqld-exporter" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.764038 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="60433146-3d7a-433d-a3c3-3152b7591e49" containerName="mysqld-exporter" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.764369 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="60433146-3d7a-433d-a3c3-3152b7591e49" containerName="mysqld-exporter" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.764424 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed5c2715-a8a7-4d10-ba69-32133e2b6e51" containerName="kube-state-metrics" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.765582 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.767936 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.769566 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.774072 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h4gl8"] Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.774538 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h4gl8" podUID="efa9e238-79b0-4757-acab-53537b5ae93a" containerName="registry-server" containerID="cri-o://226995caa65f7b453cbb5dd7edc24c62282b79109c70e17cb47825c47482f362" gracePeriod=2 Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.794684 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.796653 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.798848 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.800035 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.808734 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.823567 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.883492 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a0e671c-98d8-42e6-bde3-624ca42b4d48-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9a0e671c-98d8-42e6-bde3-624ca42b4d48\") " pod="openstack/kube-state-metrics-0" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.883569 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2g22\" (UniqueName: \"kubernetes.io/projected/9a0e671c-98d8-42e6-bde3-624ca42b4d48-kube-api-access-r2g22\") pod \"kube-state-metrics-0\" (UID: \"9a0e671c-98d8-42e6-bde3-624ca42b4d48\") " pod="openstack/kube-state-metrics-0" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.884141 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9a0e671c-98d8-42e6-bde3-624ca42b4d48-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9a0e671c-98d8-42e6-bde3-624ca42b4d48\") " pod="openstack/kube-state-metrics-0" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.892208 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a0e671c-98d8-42e6-bde3-624ca42b4d48-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9a0e671c-98d8-42e6-bde3-624ca42b4d48\") " pod="openstack/kube-state-metrics-0" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.994823 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f791fe-acdb-4266-a973-2bf0aa766623-config-data\") pod \"mysqld-exporter-0\" (UID: \"14f791fe-acdb-4266-a973-2bf0aa766623\") " pod="openstack/mysqld-exporter-0" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.994883 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9a0e671c-98d8-42e6-bde3-624ca42b4d48-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9a0e671c-98d8-42e6-bde3-624ca42b4d48\") " pod="openstack/kube-state-metrics-0" Feb 27 10:52:40 crc kubenswrapper[4728]: I0227 10:52:40.994926 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f791fe-acdb-4266-a973-2bf0aa766623-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"14f791fe-acdb-4266-a973-2bf0aa766623\") " pod="openstack/mysqld-exporter-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:40.995161 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a0e671c-98d8-42e6-bde3-624ca42b4d48-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9a0e671c-98d8-42e6-bde3-624ca42b4d48\") " pod="openstack/kube-state-metrics-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:40.995201 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srd9q\" (UniqueName: \"kubernetes.io/projected/14f791fe-acdb-4266-a973-2bf0aa766623-kube-api-access-srd9q\") pod \"mysqld-exporter-0\" (UID: \"14f791fe-acdb-4266-a973-2bf0aa766623\") " pod="openstack/mysqld-exporter-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:40.995442 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a0e671c-98d8-42e6-bde3-624ca42b4d48-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9a0e671c-98d8-42e6-bde3-624ca42b4d48\") " pod="openstack/kube-state-metrics-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:40.995480 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2g22\" (UniqueName: \"kubernetes.io/projected/9a0e671c-98d8-42e6-bde3-624ca42b4d48-kube-api-access-r2g22\") pod \"kube-state-metrics-0\" (UID: \"9a0e671c-98d8-42e6-bde3-624ca42b4d48\") " pod="openstack/kube-state-metrics-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:40.995497 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f791fe-acdb-4266-a973-2bf0aa766623-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"14f791fe-acdb-4266-a973-2bf0aa766623\") " pod="openstack/mysqld-exporter-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.020244 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9a0e671c-98d8-42e6-bde3-624ca42b4d48-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9a0e671c-98d8-42e6-bde3-624ca42b4d48\") " pod="openstack/kube-state-metrics-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.022183 4728 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a0e671c-98d8-42e6-bde3-624ca42b4d48-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9a0e671c-98d8-42e6-bde3-624ca42b4d48\") " pod="openstack/kube-state-metrics-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.023993 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2g22\" (UniqueName: \"kubernetes.io/projected/9a0e671c-98d8-42e6-bde3-624ca42b4d48-kube-api-access-r2g22\") pod \"kube-state-metrics-0\" (UID: \"9a0e671c-98d8-42e6-bde3-624ca42b4d48\") " pod="openstack/kube-state-metrics-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.043719 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a0e671c-98d8-42e6-bde3-624ca42b4d48-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9a0e671c-98d8-42e6-bde3-624ca42b4d48\") " pod="openstack/kube-state-metrics-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.098765 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f791fe-acdb-4266-a973-2bf0aa766623-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"14f791fe-acdb-4266-a973-2bf0aa766623\") " pod="openstack/mysqld-exporter-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.098808 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f791fe-acdb-4266-a973-2bf0aa766623-config-data\") pod \"mysqld-exporter-0\" (UID: \"14f791fe-acdb-4266-a973-2bf0aa766623\") " pod="openstack/mysqld-exporter-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.098863 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/14f791fe-acdb-4266-a973-2bf0aa766623-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"14f791fe-acdb-4266-a973-2bf0aa766623\") " pod="openstack/mysqld-exporter-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.098970 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srd9q\" (UniqueName: \"kubernetes.io/projected/14f791fe-acdb-4266-a973-2bf0aa766623-kube-api-access-srd9q\") pod \"mysqld-exporter-0\" (UID: \"14f791fe-acdb-4266-a973-2bf0aa766623\") " pod="openstack/mysqld-exporter-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.102999 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f791fe-acdb-4266-a973-2bf0aa766623-config-data\") pod \"mysqld-exporter-0\" (UID: \"14f791fe-acdb-4266-a973-2bf0aa766623\") " pod="openstack/mysqld-exporter-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.104186 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.110896 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f791fe-acdb-4266-a973-2bf0aa766623-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"14f791fe-acdb-4266-a973-2bf0aa766623\") " pod="openstack/mysqld-exporter-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.116190 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f791fe-acdb-4266-a973-2bf0aa766623-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"14f791fe-acdb-4266-a973-2bf0aa766623\") " pod="openstack/mysqld-exporter-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.140685 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srd9q\" (UniqueName: \"kubernetes.io/projected/14f791fe-acdb-4266-a973-2bf0aa766623-kube-api-access-srd9q\") pod \"mysqld-exporter-0\" (UID: \"14f791fe-acdb-4266-a973-2bf0aa766623\") " pod="openstack/mysqld-exporter-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.287103 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fktbl" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.413617 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgckv\" (UniqueName: \"kubernetes.io/projected/24ef4f58-08f3-4576-9e84-83c0575600a3-kube-api-access-lgckv\") pod \"24ef4f58-08f3-4576-9e84-83c0575600a3\" (UID: \"24ef4f58-08f3-4576-9e84-83c0575600a3\") " Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.413919 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ef4f58-08f3-4576-9e84-83c0575600a3-catalog-content\") pod \"24ef4f58-08f3-4576-9e84-83c0575600a3\" (UID: \"24ef4f58-08f3-4576-9e84-83c0575600a3\") " Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.414005 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ef4f58-08f3-4576-9e84-83c0575600a3-utilities\") pod \"24ef4f58-08f3-4576-9e84-83c0575600a3\" (UID: \"24ef4f58-08f3-4576-9e84-83c0575600a3\") " Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.415120 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24ef4f58-08f3-4576-9e84-83c0575600a3-utilities" (OuterVolumeSpecName: "utilities") pod "24ef4f58-08f3-4576-9e84-83c0575600a3" (UID: "24ef4f58-08f3-4576-9e84-83c0575600a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.417242 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.418035 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h4gl8" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.421292 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ef4f58-08f3-4576-9e84-83c0575600a3-kube-api-access-lgckv" (OuterVolumeSpecName: "kube-api-access-lgckv") pod "24ef4f58-08f3-4576-9e84-83c0575600a3" (UID: "24ef4f58-08f3-4576-9e84-83c0575600a3"). InnerVolumeSpecName "kube-api-access-lgckv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.516295 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa9e238-79b0-4757-acab-53537b5ae93a-utilities\") pod \"efa9e238-79b0-4757-acab-53537b5ae93a\" (UID: \"efa9e238-79b0-4757-acab-53537b5ae93a\") " Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.516675 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rljkb\" (UniqueName: \"kubernetes.io/projected/efa9e238-79b0-4757-acab-53537b5ae93a-kube-api-access-rljkb\") pod \"efa9e238-79b0-4757-acab-53537b5ae93a\" (UID: \"efa9e238-79b0-4757-acab-53537b5ae93a\") " Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.516855 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa9e238-79b0-4757-acab-53537b5ae93a-catalog-content\") pod \"efa9e238-79b0-4757-acab-53537b5ae93a\" (UID: \"efa9e238-79b0-4757-acab-53537b5ae93a\") " Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.516883 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa9e238-79b0-4757-acab-53537b5ae93a-utilities" (OuterVolumeSpecName: "utilities") pod "efa9e238-79b0-4757-acab-53537b5ae93a" (UID: "efa9e238-79b0-4757-acab-53537b5ae93a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.517616 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgckv\" (UniqueName: \"kubernetes.io/projected/24ef4f58-08f3-4576-9e84-83c0575600a3-kube-api-access-lgckv\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.517645 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa9e238-79b0-4757-acab-53537b5ae93a-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.517659 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ef4f58-08f3-4576-9e84-83c0575600a3-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.524649 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efa9e238-79b0-4757-acab-53537b5ae93a-kube-api-access-rljkb" (OuterVolumeSpecName: "kube-api-access-rljkb") pod "efa9e238-79b0-4757-acab-53537b5ae93a" (UID: "efa9e238-79b0-4757-acab-53537b5ae93a"). InnerVolumeSpecName "kube-api-access-rljkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.535852 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24ef4f58-08f3-4576-9e84-83c0575600a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24ef4f58-08f3-4576-9e84-83c0575600a3" (UID: "24ef4f58-08f3-4576-9e84-83c0575600a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.606236 4728 generic.go:334] "Generic (PLEG): container finished" podID="24ef4f58-08f3-4576-9e84-83c0575600a3" containerID="6a60be2da75ed4d772840f030bca0061f9f590e49ec3f22b4bbe70bf113faf77" exitCode=0 Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.606487 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fktbl" event={"ID":"24ef4f58-08f3-4576-9e84-83c0575600a3","Type":"ContainerDied","Data":"6a60be2da75ed4d772840f030bca0061f9f590e49ec3f22b4bbe70bf113faf77"} Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.606527 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fktbl" event={"ID":"24ef4f58-08f3-4576-9e84-83c0575600a3","Type":"ContainerDied","Data":"7e7f0f243897ab79e9975f4b8a099c33f571749c4c82bec4cbb380e19fce3fd5"} Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.606545 4728 scope.go:117] "RemoveContainer" containerID="6a60be2da75ed4d772840f030bca0061f9f590e49ec3f22b4bbe70bf113faf77" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.606794 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fktbl" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.614622 4728 generic.go:334] "Generic (PLEG): container finished" podID="efa9e238-79b0-4757-acab-53537b5ae93a" containerID="226995caa65f7b453cbb5dd7edc24c62282b79109c70e17cb47825c47482f362" exitCode=0 Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.614690 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4gl8" event={"ID":"efa9e238-79b0-4757-acab-53537b5ae93a","Type":"ContainerDied","Data":"226995caa65f7b453cbb5dd7edc24c62282b79109c70e17cb47825c47482f362"} Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.614722 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4gl8" event={"ID":"efa9e238-79b0-4757-acab-53537b5ae93a","Type":"ContainerDied","Data":"74ef8f99282344ecb8d0dcb39e15ce74bec942b00c024aa6cbda5afb9c354ad4"} Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.614794 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h4gl8" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.619286 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ef4f58-08f3-4576-9e84-83c0575600a3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.619304 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rljkb\" (UniqueName: \"kubernetes.io/projected/efa9e238-79b0-4757-acab-53537b5ae93a-kube-api-access-rljkb\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.657662 4728 scope.go:117] "RemoveContainer" containerID="5bb2caa527a84e5e3a90245b5c0d8aa47dc0c1a4fc4f2b69dacfc36afd0260b4" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.657846 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fktbl"] Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.661589 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa9e238-79b0-4757-acab-53537b5ae93a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efa9e238-79b0-4757-acab-53537b5ae93a" (UID: "efa9e238-79b0-4757-acab-53537b5ae93a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.694685 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fktbl"] Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.696654 4728 scope.go:117] "RemoveContainer" containerID="b4a17788435898136838b62b6e261a0b47b31af4c9fa2253a3639cc8a2f55203" Feb 27 10:52:41 crc kubenswrapper[4728]: W0227 10:52:41.714951 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a0e671c_98d8_42e6_bde3_624ca42b4d48.slice/crio-7cb854fccbe396317f925484451848228421b6d1f029f188c7d176072119e0d4 WatchSource:0}: Error finding container 7cb854fccbe396317f925484451848228421b6d1f029f188c7d176072119e0d4: Status 404 returned error can't find the container with id 7cb854fccbe396317f925484451848228421b6d1f029f188c7d176072119e0d4 Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.722941 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa9e238-79b0-4757-acab-53537b5ae93a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.735620 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.737079 4728 scope.go:117] "RemoveContainer" containerID="6a60be2da75ed4d772840f030bca0061f9f590e49ec3f22b4bbe70bf113faf77" Feb 27 10:52:41 crc kubenswrapper[4728]: E0227 10:52:41.737478 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a60be2da75ed4d772840f030bca0061f9f590e49ec3f22b4bbe70bf113faf77\": container with ID starting with 6a60be2da75ed4d772840f030bca0061f9f590e49ec3f22b4bbe70bf113faf77 not found: ID does not exist" 
containerID="6a60be2da75ed4d772840f030bca0061f9f590e49ec3f22b4bbe70bf113faf77" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.737541 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a60be2da75ed4d772840f030bca0061f9f590e49ec3f22b4bbe70bf113faf77"} err="failed to get container status \"6a60be2da75ed4d772840f030bca0061f9f590e49ec3f22b4bbe70bf113faf77\": rpc error: code = NotFound desc = could not find container \"6a60be2da75ed4d772840f030bca0061f9f590e49ec3f22b4bbe70bf113faf77\": container with ID starting with 6a60be2da75ed4d772840f030bca0061f9f590e49ec3f22b4bbe70bf113faf77 not found: ID does not exist" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.737564 4728 scope.go:117] "RemoveContainer" containerID="5bb2caa527a84e5e3a90245b5c0d8aa47dc0c1a4fc4f2b69dacfc36afd0260b4" Feb 27 10:52:41 crc kubenswrapper[4728]: E0227 10:52:41.740280 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bb2caa527a84e5e3a90245b5c0d8aa47dc0c1a4fc4f2b69dacfc36afd0260b4\": container with ID starting with 5bb2caa527a84e5e3a90245b5c0d8aa47dc0c1a4fc4f2b69dacfc36afd0260b4 not found: ID does not exist" containerID="5bb2caa527a84e5e3a90245b5c0d8aa47dc0c1a4fc4f2b69dacfc36afd0260b4" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.740337 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bb2caa527a84e5e3a90245b5c0d8aa47dc0c1a4fc4f2b69dacfc36afd0260b4"} err="failed to get container status \"5bb2caa527a84e5e3a90245b5c0d8aa47dc0c1a4fc4f2b69dacfc36afd0260b4\": rpc error: code = NotFound desc = could not find container \"5bb2caa527a84e5e3a90245b5c0d8aa47dc0c1a4fc4f2b69dacfc36afd0260b4\": container with ID starting with 5bb2caa527a84e5e3a90245b5c0d8aa47dc0c1a4fc4f2b69dacfc36afd0260b4 not found: ID does not exist" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.740366 4728 scope.go:117] 
"RemoveContainer" containerID="b4a17788435898136838b62b6e261a0b47b31af4c9fa2253a3639cc8a2f55203" Feb 27 10:52:41 crc kubenswrapper[4728]: E0227 10:52:41.742061 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4a17788435898136838b62b6e261a0b47b31af4c9fa2253a3639cc8a2f55203\": container with ID starting with b4a17788435898136838b62b6e261a0b47b31af4c9fa2253a3639cc8a2f55203 not found: ID does not exist" containerID="b4a17788435898136838b62b6e261a0b47b31af4c9fa2253a3639cc8a2f55203" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.742090 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4a17788435898136838b62b6e261a0b47b31af4c9fa2253a3639cc8a2f55203"} err="failed to get container status \"b4a17788435898136838b62b6e261a0b47b31af4c9fa2253a3639cc8a2f55203\": rpc error: code = NotFound desc = could not find container \"b4a17788435898136838b62b6e261a0b47b31af4c9fa2253a3639cc8a2f55203\": container with ID starting with b4a17788435898136838b62b6e261a0b47b31af4c9fa2253a3639cc8a2f55203 not found: ID does not exist" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.742129 4728 scope.go:117] "RemoveContainer" containerID="226995caa65f7b453cbb5dd7edc24c62282b79109c70e17cb47825c47482f362" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.777238 4728 scope.go:117] "RemoveContainer" containerID="e8364fc84ab88348ec296479a71d7b4bfcacbc70aaba3595152c2dd8e66ef77a" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.818552 4728 scope.go:117] "RemoveContainer" containerID="20c67cb254710b9dab1898fc699ab336aed6ee306edafed02c5eff4caee96ea8" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.870742 4728 scope.go:117] "RemoveContainer" containerID="226995caa65f7b453cbb5dd7edc24c62282b79109c70e17cb47825c47482f362" Feb 27 10:52:41 crc kubenswrapper[4728]: E0227 10:52:41.871233 4728 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"226995caa65f7b453cbb5dd7edc24c62282b79109c70e17cb47825c47482f362\": container with ID starting with 226995caa65f7b453cbb5dd7edc24c62282b79109c70e17cb47825c47482f362 not found: ID does not exist" containerID="226995caa65f7b453cbb5dd7edc24c62282b79109c70e17cb47825c47482f362" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.871269 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"226995caa65f7b453cbb5dd7edc24c62282b79109c70e17cb47825c47482f362"} err="failed to get container status \"226995caa65f7b453cbb5dd7edc24c62282b79109c70e17cb47825c47482f362\": rpc error: code = NotFound desc = could not find container \"226995caa65f7b453cbb5dd7edc24c62282b79109c70e17cb47825c47482f362\": container with ID starting with 226995caa65f7b453cbb5dd7edc24c62282b79109c70e17cb47825c47482f362 not found: ID does not exist" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.871316 4728 scope.go:117] "RemoveContainer" containerID="e8364fc84ab88348ec296479a71d7b4bfcacbc70aaba3595152c2dd8e66ef77a" Feb 27 10:52:41 crc kubenswrapper[4728]: E0227 10:52:41.871630 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8364fc84ab88348ec296479a71d7b4bfcacbc70aaba3595152c2dd8e66ef77a\": container with ID starting with e8364fc84ab88348ec296479a71d7b4bfcacbc70aaba3595152c2dd8e66ef77a not found: ID does not exist" containerID="e8364fc84ab88348ec296479a71d7b4bfcacbc70aaba3595152c2dd8e66ef77a" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.871671 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8364fc84ab88348ec296479a71d7b4bfcacbc70aaba3595152c2dd8e66ef77a"} err="failed to get container status \"e8364fc84ab88348ec296479a71d7b4bfcacbc70aaba3595152c2dd8e66ef77a\": rpc error: code = NotFound desc = could not find container 
\"e8364fc84ab88348ec296479a71d7b4bfcacbc70aaba3595152c2dd8e66ef77a\": container with ID starting with e8364fc84ab88348ec296479a71d7b4bfcacbc70aaba3595152c2dd8e66ef77a not found: ID does not exist" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.871701 4728 scope.go:117] "RemoveContainer" containerID="20c67cb254710b9dab1898fc699ab336aed6ee306edafed02c5eff4caee96ea8" Feb 27 10:52:41 crc kubenswrapper[4728]: E0227 10:52:41.872144 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20c67cb254710b9dab1898fc699ab336aed6ee306edafed02c5eff4caee96ea8\": container with ID starting with 20c67cb254710b9dab1898fc699ab336aed6ee306edafed02c5eff4caee96ea8 not found: ID does not exist" containerID="20c67cb254710b9dab1898fc699ab336aed6ee306edafed02c5eff4caee96ea8" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.872172 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20c67cb254710b9dab1898fc699ab336aed6ee306edafed02c5eff4caee96ea8"} err="failed to get container status \"20c67cb254710b9dab1898fc699ab336aed6ee306edafed02c5eff4caee96ea8\": rpc error: code = NotFound desc = could not find container \"20c67cb254710b9dab1898fc699ab336aed6ee306edafed02c5eff4caee96ea8\": container with ID starting with 20c67cb254710b9dab1898fc699ab336aed6ee306edafed02c5eff4caee96ea8 not found: ID does not exist" Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.956013 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h4gl8"] Feb 27 10:52:41 crc kubenswrapper[4728]: I0227 10:52:41.976095 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h4gl8"] Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.004222 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.070566 4728 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.070884 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerName="ceilometer-central-agent" containerID="cri-o://d34e6f3f51c21e903d239d9ff8d2e3625e8ad1365225d3419b8e3f0ff4c71928" gracePeriod=30 Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.071438 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerName="proxy-httpd" containerID="cri-o://2af07107f345b07c0265df730d5c3a3cc12c941598a5cf2874e2da81d16d043b" gracePeriod=30 Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.071529 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerName="sg-core" containerID="cri-o://4872384e50c57811fe680473d1af185d03e153375b72a73f74b00dcf699ea81a" gracePeriod=30 Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.071586 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerName="ceilometer-notification-agent" containerID="cri-o://ff5386bc9ce4ec7bc6ad56a5dc3d71b3f25aefa437b4d819fb231decd7b4e545" gracePeriod=30 Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.673702 4728 generic.go:334] "Generic (PLEG): container finished" podID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerID="2af07107f345b07c0265df730d5c3a3cc12c941598a5cf2874e2da81d16d043b" exitCode=0 Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.674223 4728 generic.go:334] "Generic (PLEG): container finished" podID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerID="4872384e50c57811fe680473d1af185d03e153375b72a73f74b00dcf699ea81a" exitCode=2 Feb 27 
10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.674232 4728 generic.go:334] "Generic (PLEG): container finished" podID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerID="d34e6f3f51c21e903d239d9ff8d2e3625e8ad1365225d3419b8e3f0ff4c71928" exitCode=0 Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.674276 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0b373eb-8903-41d1-b698-5e2a0a87aae7","Type":"ContainerDied","Data":"2af07107f345b07c0265df730d5c3a3cc12c941598a5cf2874e2da81d16d043b"} Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.674303 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0b373eb-8903-41d1-b698-5e2a0a87aae7","Type":"ContainerDied","Data":"4872384e50c57811fe680473d1af185d03e153375b72a73f74b00dcf699ea81a"} Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.674313 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0b373eb-8903-41d1-b698-5e2a0a87aae7","Type":"ContainerDied","Data":"d34e6f3f51c21e903d239d9ff8d2e3625e8ad1365225d3419b8e3f0ff4c71928"} Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.701761 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9a0e671c-98d8-42e6-bde3-624ca42b4d48","Type":"ContainerStarted","Data":"5767f96947b142faca18e8c692f926cddb872256cc442bf253fff87f62b9dc28"} Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.701807 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9a0e671c-98d8-42e6-bde3-624ca42b4d48","Type":"ContainerStarted","Data":"7cb854fccbe396317f925484451848228421b6d1f029f188c7d176072119e0d4"} Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.703346 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.761742 4728 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ef4f58-08f3-4576-9e84-83c0575600a3" path="/var/lib/kubelet/pods/24ef4f58-08f3-4576-9e84-83c0575600a3/volumes" Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.762724 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60433146-3d7a-433d-a3c3-3152b7591e49" path="/var/lib/kubelet/pods/60433146-3d7a-433d-a3c3-3152b7591e49/volumes" Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.763414 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efa9e238-79b0-4757-acab-53537b5ae93a" path="/var/lib/kubelet/pods/efa9e238-79b0-4757-acab-53537b5ae93a/volumes" Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.764344 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"14f791fe-acdb-4266-a973-2bf0aa766623","Type":"ContainerStarted","Data":"b4eb06eddbb574b6e877c7f3e3cab8eff0d3c5d3bf26945703d02589eec239f5"} Feb 27 10:52:42 crc kubenswrapper[4728]: I0227 10:52:42.770076 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.361725843 podStartE2EDuration="2.770055443s" podCreationTimestamp="2026-02-27 10:52:40 +0000 UTC" firstStartedPulling="2026-02-27 10:52:41.737727091 +0000 UTC m=+1581.700093197" lastFinishedPulling="2026-02-27 10:52:42.146056681 +0000 UTC m=+1582.108422797" observedRunningTime="2026-02-27 10:52:42.746891175 +0000 UTC m=+1582.709257281" watchObservedRunningTime="2026-02-27 10:52:42.770055443 +0000 UTC m=+1582.732421549" Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.696814 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.756290 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"14f791fe-acdb-4266-a973-2bf0aa766623","Type":"ContainerStarted","Data":"4e3ad11ee7c02bf887a9f5f0116b5752be94442ddc58bbbd09f5d1f40c5aee2f"} Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.766567 4728 generic.go:334] "Generic (PLEG): container finished" podID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerID="ff5386bc9ce4ec7bc6ad56a5dc3d71b3f25aefa437b4d819fb231decd7b4e545" exitCode=0 Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.766649 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.766697 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0b373eb-8903-41d1-b698-5e2a0a87aae7","Type":"ContainerDied","Data":"ff5386bc9ce4ec7bc6ad56a5dc3d71b3f25aefa437b4d819fb231decd7b4e545"} Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.766733 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0b373eb-8903-41d1-b698-5e2a0a87aae7","Type":"ContainerDied","Data":"dfbcb593b17c42c849d4bc93bb5c402a3d5faa2bf73fc99ed6b8d3f9d7c799d0"} Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.766755 4728 scope.go:117] "RemoveContainer" containerID="2af07107f345b07c0265df730d5c3a3cc12c941598a5cf2874e2da81d16d043b" Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.775004 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.232296615 podStartE2EDuration="3.774983342s" podCreationTimestamp="2026-02-27 10:52:40 +0000 UTC" firstStartedPulling="2026-02-27 10:52:41.990072008 +0000 UTC m=+1581.952438114" lastFinishedPulling="2026-02-27 10:52:42.532758735 +0000 UTC 
m=+1582.495124841" observedRunningTime="2026-02-27 10:52:43.772885905 +0000 UTC m=+1583.735252031" watchObservedRunningTime="2026-02-27 10:52:43.774983342 +0000 UTC m=+1583.737349448" Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.830995 4728 scope.go:117] "RemoveContainer" containerID="4872384e50c57811fe680473d1af185d03e153375b72a73f74b00dcf699ea81a" Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.860086 4728 scope.go:117] "RemoveContainer" containerID="ff5386bc9ce4ec7bc6ad56a5dc3d71b3f25aefa437b4d819fb231decd7b4e545" Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.879653 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-combined-ca-bundle\") pod \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.879780 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-scripts\") pod \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.879835 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwc9g\" (UniqueName: \"kubernetes.io/projected/c0b373eb-8903-41d1-b698-5e2a0a87aae7-kube-api-access-nwc9g\") pod \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.879864 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-sg-core-conf-yaml\") pod \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " Feb 27 10:52:43 crc 
kubenswrapper[4728]: I0227 10:52:43.879930 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0b373eb-8903-41d1-b698-5e2a0a87aae7-log-httpd\") pod \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.879947 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-config-data\") pod \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.879988 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0b373eb-8903-41d1-b698-5e2a0a87aae7-run-httpd\") pod \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\" (UID: \"c0b373eb-8903-41d1-b698-5e2a0a87aae7\") " Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.881280 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0b373eb-8903-41d1-b698-5e2a0a87aae7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c0b373eb-8903-41d1-b698-5e2a0a87aae7" (UID: "c0b373eb-8903-41d1-b698-5e2a0a87aae7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.883243 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0b373eb-8903-41d1-b698-5e2a0a87aae7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c0b373eb-8903-41d1-b698-5e2a0a87aae7" (UID: "c0b373eb-8903-41d1-b698-5e2a0a87aae7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.901996 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0b373eb-8903-41d1-b698-5e2a0a87aae7-kube-api-access-nwc9g" (OuterVolumeSpecName: "kube-api-access-nwc9g") pod "c0b373eb-8903-41d1-b698-5e2a0a87aae7" (UID: "c0b373eb-8903-41d1-b698-5e2a0a87aae7"). InnerVolumeSpecName "kube-api-access-nwc9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.906352 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-scripts" (OuterVolumeSpecName: "scripts") pod "c0b373eb-8903-41d1-b698-5e2a0a87aae7" (UID: "c0b373eb-8903-41d1-b698-5e2a0a87aae7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.907047 4728 scope.go:117] "RemoveContainer" containerID="d34e6f3f51c21e903d239d9ff8d2e3625e8ad1365225d3419b8e3f0ff4c71928" Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.946445 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c0b373eb-8903-41d1-b698-5e2a0a87aae7" (UID: "c0b373eb-8903-41d1-b698-5e2a0a87aae7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.978875 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fn4cs" podUID="9c6a8916-fd56-45ba-837e-78eb7fd7b7f2" containerName="registry-server" probeResult="failure" output=< Feb 27 10:52:43 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 10:52:43 crc kubenswrapper[4728]: > Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.984569 4728 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0b373eb-8903-41d1-b698-5e2a0a87aae7-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.984606 4728 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0b373eb-8903-41d1-b698-5e2a0a87aae7-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.984617 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.984629 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwc9g\" (UniqueName: \"kubernetes.io/projected/c0b373eb-8903-41d1-b698-5e2a0a87aae7-kube-api-access-nwc9g\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:43 crc kubenswrapper[4728]: I0227 10:52:43.984641 4728 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.001216 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0b373eb-8903-41d1-b698-5e2a0a87aae7" (UID: "c0b373eb-8903-41d1-b698-5e2a0a87aae7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.031739 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-config-data" (OuterVolumeSpecName: "config-data") pod "c0b373eb-8903-41d1-b698-5e2a0a87aae7" (UID: "c0b373eb-8903-41d1-b698-5e2a0a87aae7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.046960 4728 scope.go:117] "RemoveContainer" containerID="2af07107f345b07c0265df730d5c3a3cc12c941598a5cf2874e2da81d16d043b" Feb 27 10:52:44 crc kubenswrapper[4728]: E0227 10:52:44.048309 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2af07107f345b07c0265df730d5c3a3cc12c941598a5cf2874e2da81d16d043b\": container with ID starting with 2af07107f345b07c0265df730d5c3a3cc12c941598a5cf2874e2da81d16d043b not found: ID does not exist" containerID="2af07107f345b07c0265df730d5c3a3cc12c941598a5cf2874e2da81d16d043b" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.048355 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2af07107f345b07c0265df730d5c3a3cc12c941598a5cf2874e2da81d16d043b"} err="failed to get container status \"2af07107f345b07c0265df730d5c3a3cc12c941598a5cf2874e2da81d16d043b\": rpc error: code = NotFound desc = could not find container \"2af07107f345b07c0265df730d5c3a3cc12c941598a5cf2874e2da81d16d043b\": container with ID starting with 2af07107f345b07c0265df730d5c3a3cc12c941598a5cf2874e2da81d16d043b not found: ID does not exist" Feb 27 10:52:44 crc 
kubenswrapper[4728]: I0227 10:52:44.048382 4728 scope.go:117] "RemoveContainer" containerID="4872384e50c57811fe680473d1af185d03e153375b72a73f74b00dcf699ea81a" Feb 27 10:52:44 crc kubenswrapper[4728]: E0227 10:52:44.048866 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4872384e50c57811fe680473d1af185d03e153375b72a73f74b00dcf699ea81a\": container with ID starting with 4872384e50c57811fe680473d1af185d03e153375b72a73f74b00dcf699ea81a not found: ID does not exist" containerID="4872384e50c57811fe680473d1af185d03e153375b72a73f74b00dcf699ea81a" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.048899 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4872384e50c57811fe680473d1af185d03e153375b72a73f74b00dcf699ea81a"} err="failed to get container status \"4872384e50c57811fe680473d1af185d03e153375b72a73f74b00dcf699ea81a\": rpc error: code = NotFound desc = could not find container \"4872384e50c57811fe680473d1af185d03e153375b72a73f74b00dcf699ea81a\": container with ID starting with 4872384e50c57811fe680473d1af185d03e153375b72a73f74b00dcf699ea81a not found: ID does not exist" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.048916 4728 scope.go:117] "RemoveContainer" containerID="ff5386bc9ce4ec7bc6ad56a5dc3d71b3f25aefa437b4d819fb231decd7b4e545" Feb 27 10:52:44 crc kubenswrapper[4728]: E0227 10:52:44.049197 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff5386bc9ce4ec7bc6ad56a5dc3d71b3f25aefa437b4d819fb231decd7b4e545\": container with ID starting with ff5386bc9ce4ec7bc6ad56a5dc3d71b3f25aefa437b4d819fb231decd7b4e545 not found: ID does not exist" containerID="ff5386bc9ce4ec7bc6ad56a5dc3d71b3f25aefa437b4d819fb231decd7b4e545" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.049241 4728 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ff5386bc9ce4ec7bc6ad56a5dc3d71b3f25aefa437b4d819fb231decd7b4e545"} err="failed to get container status \"ff5386bc9ce4ec7bc6ad56a5dc3d71b3f25aefa437b4d819fb231decd7b4e545\": rpc error: code = NotFound desc = could not find container \"ff5386bc9ce4ec7bc6ad56a5dc3d71b3f25aefa437b4d819fb231decd7b4e545\": container with ID starting with ff5386bc9ce4ec7bc6ad56a5dc3d71b3f25aefa437b4d819fb231decd7b4e545 not found: ID does not exist" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.049271 4728 scope.go:117] "RemoveContainer" containerID="d34e6f3f51c21e903d239d9ff8d2e3625e8ad1365225d3419b8e3f0ff4c71928" Feb 27 10:52:44 crc kubenswrapper[4728]: E0227 10:52:44.049552 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d34e6f3f51c21e903d239d9ff8d2e3625e8ad1365225d3419b8e3f0ff4c71928\": container with ID starting with d34e6f3f51c21e903d239d9ff8d2e3625e8ad1365225d3419b8e3f0ff4c71928 not found: ID does not exist" containerID="d34e6f3f51c21e903d239d9ff8d2e3625e8ad1365225d3419b8e3f0ff4c71928" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.049578 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d34e6f3f51c21e903d239d9ff8d2e3625e8ad1365225d3419b8e3f0ff4c71928"} err="failed to get container status \"d34e6f3f51c21e903d239d9ff8d2e3625e8ad1365225d3419b8e3f0ff4c71928\": rpc error: code = NotFound desc = could not find container \"d34e6f3f51c21e903d239d9ff8d2e3625e8ad1365225d3419b8e3f0ff4c71928\": container with ID starting with d34e6f3f51c21e903d239d9ff8d2e3625e8ad1365225d3419b8e3f0ff4c71928 not found: ID does not exist" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.087834 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:44 crc 
kubenswrapper[4728]: I0227 10:52:44.087872 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0b373eb-8903-41d1-b698-5e2a0a87aae7-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.114645 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.128694 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.148340 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:52:44 crc kubenswrapper[4728]: E0227 10:52:44.148874 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerName="ceilometer-notification-agent" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.148895 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerName="ceilometer-notification-agent" Feb 27 10:52:44 crc kubenswrapper[4728]: E0227 10:52:44.148922 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa9e238-79b0-4757-acab-53537b5ae93a" containerName="extract-content" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.148927 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa9e238-79b0-4757-acab-53537b5ae93a" containerName="extract-content" Feb 27 10:52:44 crc kubenswrapper[4728]: E0227 10:52:44.148940 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa9e238-79b0-4757-acab-53537b5ae93a" containerName="registry-server" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.148949 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa9e238-79b0-4757-acab-53537b5ae93a" containerName="registry-server" Feb 27 10:52:44 crc kubenswrapper[4728]: E0227 10:52:44.148956 4728 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="24ef4f58-08f3-4576-9e84-83c0575600a3" containerName="extract-utilities" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.148962 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ef4f58-08f3-4576-9e84-83c0575600a3" containerName="extract-utilities" Feb 27 10:52:44 crc kubenswrapper[4728]: E0227 10:52:44.148973 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerName="proxy-httpd" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.148979 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerName="proxy-httpd" Feb 27 10:52:44 crc kubenswrapper[4728]: E0227 10:52:44.148989 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ef4f58-08f3-4576-9e84-83c0575600a3" containerName="registry-server" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.148995 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ef4f58-08f3-4576-9e84-83c0575600a3" containerName="registry-server" Feb 27 10:52:44 crc kubenswrapper[4728]: E0227 10:52:44.149014 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa9e238-79b0-4757-acab-53537b5ae93a" containerName="extract-utilities" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.149020 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa9e238-79b0-4757-acab-53537b5ae93a" containerName="extract-utilities" Feb 27 10:52:44 crc kubenswrapper[4728]: E0227 10:52:44.149033 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerName="sg-core" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.149039 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerName="sg-core" Feb 27 10:52:44 crc kubenswrapper[4728]: E0227 10:52:44.149048 4728 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="24ef4f58-08f3-4576-9e84-83c0575600a3" containerName="extract-content" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.149054 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ef4f58-08f3-4576-9e84-83c0575600a3" containerName="extract-content" Feb 27 10:52:44 crc kubenswrapper[4728]: E0227 10:52:44.149069 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerName="ceilometer-central-agent" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.149076 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerName="ceilometer-central-agent" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.149272 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerName="proxy-httpd" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.149287 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerName="ceilometer-notification-agent" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.149299 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ef4f58-08f3-4576-9e84-83c0575600a3" containerName="registry-server" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.149311 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa9e238-79b0-4757-acab-53537b5ae93a" containerName="registry-server" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.149325 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerName="sg-core" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.149335 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" containerName="ceilometer-central-agent" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.151434 4728 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.154237 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.154523 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.154687 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.179195 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.205154 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6xxv8" podUID="d87df44b-fc24-4c81-8c22-94a12665da84" containerName="registry-server" probeResult="failure" output=< Feb 27 10:52:44 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 10:52:44 crc kubenswrapper[4728]: > Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.292404 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70ad0ef0-1165-4657-a3ec-04899eb20cef-log-httpd\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.292454 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72hvn\" (UniqueName: \"kubernetes.io/projected/70ad0ef0-1165-4657-a3ec-04899eb20cef-kube-api-access-72hvn\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.292616 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-config-data\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.292650 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-scripts\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.292776 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.292896 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.292950 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.293103 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/70ad0ef0-1165-4657-a3ec-04899eb20cef-run-httpd\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.395592 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70ad0ef0-1165-4657-a3ec-04899eb20cef-log-httpd\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.395631 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72hvn\" (UniqueName: \"kubernetes.io/projected/70ad0ef0-1165-4657-a3ec-04899eb20cef-kube-api-access-72hvn\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.395724 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-config-data\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.395752 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-scripts\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.395811 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 
10:52:44.395839 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.395855 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.395899 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70ad0ef0-1165-4657-a3ec-04899eb20cef-run-httpd\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.396314 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70ad0ef0-1165-4657-a3ec-04899eb20cef-run-httpd\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.396530 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70ad0ef0-1165-4657-a3ec-04899eb20cef-log-httpd\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.402744 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-scripts\") pod \"ceilometer-0\" (UID: 
\"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.402928 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.405160 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-config-data\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.405717 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.407922 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.412776 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72hvn\" (UniqueName: \"kubernetes.io/projected/70ad0ef0-1165-4657-a3ec-04899eb20cef-kube-api-access-72hvn\") pod \"ceilometer-0\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.476903 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:52:44 crc kubenswrapper[4728]: I0227 10:52:44.753874 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0b373eb-8903-41d1-b698-5e2a0a87aae7" path="/var/lib/kubelet/pods/c0b373eb-8903-41d1-b698-5e2a0a87aae7/volumes" Feb 27 10:52:45 crc kubenswrapper[4728]: I0227 10:52:45.027555 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:52:45 crc kubenswrapper[4728]: I0227 10:52:45.800840 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70ad0ef0-1165-4657-a3ec-04899eb20cef","Type":"ContainerStarted","Data":"b330e9a42e24520e7a6ab067764adec99bc42f8252ae6db7c07c9fbbeeb201fa"} Feb 27 10:52:46 crc kubenswrapper[4728]: I0227 10:52:46.820709 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70ad0ef0-1165-4657-a3ec-04899eb20cef","Type":"ContainerStarted","Data":"4dac54798872b4072bb5aefc050c1191a2c216b050f28e90aa15e404c24dd0f8"} Feb 27 10:52:46 crc kubenswrapper[4728]: I0227 10:52:46.821095 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70ad0ef0-1165-4657-a3ec-04899eb20cef","Type":"ContainerStarted","Data":"4126d162ae10b0aa3daa96a38b803ade7e58bc32b95e9d337099ed13d3cbe52f"} Feb 27 10:52:47 crc kubenswrapper[4728]: I0227 10:52:47.835017 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70ad0ef0-1165-4657-a3ec-04899eb20cef","Type":"ContainerStarted","Data":"87d108b3b1f0feb359fd3c67b088cf3b6f45bae17d2bb18ee7d6df92afc02a2f"} Feb 27 10:52:48 crc kubenswrapper[4728]: I0227 10:52:48.218671 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-dgjpm"] Feb 27 10:52:48 crc kubenswrapper[4728]: I0227 10:52:48.233602 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-dgjpm"] Feb 27 10:52:48 crc kubenswrapper[4728]: 
I0227 10:52:48.317129 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-tqhw2"] Feb 27 10:52:48 crc kubenswrapper[4728]: I0227 10:52:48.320049 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-tqhw2" Feb 27 10:52:48 crc kubenswrapper[4728]: I0227 10:52:48.358020 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-tqhw2"] Feb 27 10:52:48 crc kubenswrapper[4728]: I0227 10:52:48.409676 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c112afc6-4352-4004-885a-0b1d88caffae-combined-ca-bundle\") pod \"heat-db-sync-tqhw2\" (UID: \"c112afc6-4352-4004-885a-0b1d88caffae\") " pod="openstack/heat-db-sync-tqhw2" Feb 27 10:52:48 crc kubenswrapper[4728]: I0227 10:52:48.409749 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvwg2\" (UniqueName: \"kubernetes.io/projected/c112afc6-4352-4004-885a-0b1d88caffae-kube-api-access-bvwg2\") pod \"heat-db-sync-tqhw2\" (UID: \"c112afc6-4352-4004-885a-0b1d88caffae\") " pod="openstack/heat-db-sync-tqhw2" Feb 27 10:52:48 crc kubenswrapper[4728]: I0227 10:52:48.410017 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c112afc6-4352-4004-885a-0b1d88caffae-config-data\") pod \"heat-db-sync-tqhw2\" (UID: \"c112afc6-4352-4004-885a-0b1d88caffae\") " pod="openstack/heat-db-sync-tqhw2" Feb 27 10:52:48 crc kubenswrapper[4728]: I0227 10:52:48.512559 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c112afc6-4352-4004-885a-0b1d88caffae-combined-ca-bundle\") pod \"heat-db-sync-tqhw2\" (UID: \"c112afc6-4352-4004-885a-0b1d88caffae\") " pod="openstack/heat-db-sync-tqhw2" Feb 27 
10:52:48 crc kubenswrapper[4728]: I0227 10:52:48.512630 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvwg2\" (UniqueName: \"kubernetes.io/projected/c112afc6-4352-4004-885a-0b1d88caffae-kube-api-access-bvwg2\") pod \"heat-db-sync-tqhw2\" (UID: \"c112afc6-4352-4004-885a-0b1d88caffae\") " pod="openstack/heat-db-sync-tqhw2" Feb 27 10:52:48 crc kubenswrapper[4728]: I0227 10:52:48.512719 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c112afc6-4352-4004-885a-0b1d88caffae-config-data\") pod \"heat-db-sync-tqhw2\" (UID: \"c112afc6-4352-4004-885a-0b1d88caffae\") " pod="openstack/heat-db-sync-tqhw2" Feb 27 10:52:48 crc kubenswrapper[4728]: I0227 10:52:48.521815 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c112afc6-4352-4004-885a-0b1d88caffae-config-data\") pod \"heat-db-sync-tqhw2\" (UID: \"c112afc6-4352-4004-885a-0b1d88caffae\") " pod="openstack/heat-db-sync-tqhw2" Feb 27 10:52:48 crc kubenswrapper[4728]: I0227 10:52:48.528803 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c112afc6-4352-4004-885a-0b1d88caffae-combined-ca-bundle\") pod \"heat-db-sync-tqhw2\" (UID: \"c112afc6-4352-4004-885a-0b1d88caffae\") " pod="openstack/heat-db-sync-tqhw2" Feb 27 10:52:48 crc kubenswrapper[4728]: I0227 10:52:48.531753 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvwg2\" (UniqueName: \"kubernetes.io/projected/c112afc6-4352-4004-885a-0b1d88caffae-kube-api-access-bvwg2\") pod \"heat-db-sync-tqhw2\" (UID: \"c112afc6-4352-4004-885a-0b1d88caffae\") " pod="openstack/heat-db-sync-tqhw2" Feb 27 10:52:48 crc kubenswrapper[4728]: I0227 10:52:48.650704 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-tqhw2" Feb 27 10:52:48 crc kubenswrapper[4728]: I0227 10:52:48.752704 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e977ffad-2764-4871-bdc8-24f0c3b4caf1" path="/var/lib/kubelet/pods/e977ffad-2764-4871-bdc8-24f0c3b4caf1/volumes" Feb 27 10:52:49 crc kubenswrapper[4728]: W0227 10:52:49.166479 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc112afc6_4352_4004_885a_0b1d88caffae.slice/crio-0869876b9b5cb3dd83cb9bbfb69cabc1a1326d8268840c67a59a8e6e1a2ed001 WatchSource:0}: Error finding container 0869876b9b5cb3dd83cb9bbfb69cabc1a1326d8268840c67a59a8e6e1a2ed001: Status 404 returned error can't find the container with id 0869876b9b5cb3dd83cb9bbfb69cabc1a1326d8268840c67a59a8e6e1a2ed001 Feb 27 10:52:49 crc kubenswrapper[4728]: I0227 10:52:49.179880 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-tqhw2"] Feb 27 10:52:49 crc kubenswrapper[4728]: I0227 10:52:49.861843 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70ad0ef0-1165-4657-a3ec-04899eb20cef","Type":"ContainerStarted","Data":"378dd42f3e62e4c7d82f39fa3d085f7c82795a118964f4db571797653ac95ded"} Feb 27 10:52:49 crc kubenswrapper[4728]: I0227 10:52:49.862474 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 10:52:49 crc kubenswrapper[4728]: I0227 10:52:49.863741 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-tqhw2" event={"ID":"c112afc6-4352-4004-885a-0b1d88caffae","Type":"ContainerStarted","Data":"0869876b9b5cb3dd83cb9bbfb69cabc1a1326d8268840c67a59a8e6e1a2ed001"} Feb 27 10:52:49 crc kubenswrapper[4728]: I0227 10:52:49.888675 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.03397258 
podStartE2EDuration="5.888656847s" podCreationTimestamp="2026-02-27 10:52:44 +0000 UTC" firstStartedPulling="2026-02-27 10:52:45.029772941 +0000 UTC m=+1584.992139047" lastFinishedPulling="2026-02-27 10:52:48.884457218 +0000 UTC m=+1588.846823314" observedRunningTime="2026-02-27 10:52:49.884466063 +0000 UTC m=+1589.846832179" watchObservedRunningTime="2026-02-27 10:52:49.888656847 +0000 UTC m=+1589.851022953" Feb 27 10:52:50 crc kubenswrapper[4728]: I0227 10:52:50.805162 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 27 10:52:51 crc kubenswrapper[4728]: I0227 10:52:51.118712 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 27 10:52:51 crc kubenswrapper[4728]: I0227 10:52:51.920036 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 10:52:52 crc kubenswrapper[4728]: I0227 10:52:52.239636 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:52:52 crc kubenswrapper[4728]: I0227 10:52:52.239946 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerName="ceilometer-central-agent" containerID="cri-o://4126d162ae10b0aa3daa96a38b803ade7e58bc32b95e9d337099ed13d3cbe52f" gracePeriod=30 Feb 27 10:52:52 crc kubenswrapper[4728]: I0227 10:52:52.240360 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerName="proxy-httpd" containerID="cri-o://378dd42f3e62e4c7d82f39fa3d085f7c82795a118964f4db571797653ac95ded" gracePeriod=30 Feb 27 10:52:52 crc kubenswrapper[4728]: I0227 10:52:52.240439 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70ad0ef0-1165-4657-a3ec-04899eb20cef" 
containerName="ceilometer-notification-agent" containerID="cri-o://4dac54798872b4072bb5aefc050c1191a2c216b050f28e90aa15e404c24dd0f8" gracePeriod=30 Feb 27 10:52:52 crc kubenswrapper[4728]: I0227 10:52:52.240483 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerName="sg-core" containerID="cri-o://87d108b3b1f0feb359fd3c67b088cf3b6f45bae17d2bb18ee7d6df92afc02a2f" gracePeriod=30 Feb 27 10:52:52 crc kubenswrapper[4728]: I0227 10:52:52.947105 4728 generic.go:334] "Generic (PLEG): container finished" podID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerID="378dd42f3e62e4c7d82f39fa3d085f7c82795a118964f4db571797653ac95ded" exitCode=0 Feb 27 10:52:52 crc kubenswrapper[4728]: I0227 10:52:52.947420 4728 generic.go:334] "Generic (PLEG): container finished" podID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerID="87d108b3b1f0feb359fd3c67b088cf3b6f45bae17d2bb18ee7d6df92afc02a2f" exitCode=2 Feb 27 10:52:52 crc kubenswrapper[4728]: I0227 10:52:52.947430 4728 generic.go:334] "Generic (PLEG): container finished" podID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerID="4dac54798872b4072bb5aefc050c1191a2c216b050f28e90aa15e404c24dd0f8" exitCode=0 Feb 27 10:52:52 crc kubenswrapper[4728]: I0227 10:52:52.947193 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70ad0ef0-1165-4657-a3ec-04899eb20cef","Type":"ContainerDied","Data":"378dd42f3e62e4c7d82f39fa3d085f7c82795a118964f4db571797653ac95ded"} Feb 27 10:52:52 crc kubenswrapper[4728]: I0227 10:52:52.947465 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70ad0ef0-1165-4657-a3ec-04899eb20cef","Type":"ContainerDied","Data":"87d108b3b1f0feb359fd3c67b088cf3b6f45bae17d2bb18ee7d6df92afc02a2f"} Feb 27 10:52:52 crc kubenswrapper[4728]: I0227 10:52:52.947480 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"70ad0ef0-1165-4657-a3ec-04899eb20cef","Type":"ContainerDied","Data":"4dac54798872b4072bb5aefc050c1191a2c216b050f28e90aa15e404c24dd0f8"} Feb 27 10:52:53 crc kubenswrapper[4728]: I0227 10:52:53.003291 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fn4cs" Feb 27 10:52:53 crc kubenswrapper[4728]: I0227 10:52:53.072030 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fn4cs" Feb 27 10:52:53 crc kubenswrapper[4728]: I0227 10:52:53.234013 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6xxv8" Feb 27 10:52:53 crc kubenswrapper[4728]: I0227 10:52:53.315059 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6xxv8" Feb 27 10:52:53 crc kubenswrapper[4728]: I0227 10:52:53.866788 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fn4cs"] Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.005738 4728 generic.go:334] "Generic (PLEG): container finished" podID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerID="4126d162ae10b0aa3daa96a38b803ade7e58bc32b95e9d337099ed13d3cbe52f" exitCode=0 Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.006308 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fn4cs" podUID="9c6a8916-fd56-45ba-837e-78eb7fd7b7f2" containerName="registry-server" containerID="cri-o://7909ca97f45415011ac59b761937371303c6829059f1d23ea11e9524cd9fb36a" gracePeriod=2 Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.006579 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"70ad0ef0-1165-4657-a3ec-04899eb20cef","Type":"ContainerDied","Data":"4126d162ae10b0aa3daa96a38b803ade7e58bc32b95e9d337099ed13d3cbe52f"} Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.201475 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.325806 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72hvn\" (UniqueName: \"kubernetes.io/projected/70ad0ef0-1165-4657-a3ec-04899eb20cef-kube-api-access-72hvn\") pod \"70ad0ef0-1165-4657-a3ec-04899eb20cef\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.326196 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-config-data\") pod \"70ad0ef0-1165-4657-a3ec-04899eb20cef\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.326295 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-scripts\") pod \"70ad0ef0-1165-4657-a3ec-04899eb20cef\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.326332 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70ad0ef0-1165-4657-a3ec-04899eb20cef-log-httpd\") pod \"70ad0ef0-1165-4657-a3ec-04899eb20cef\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.326354 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70ad0ef0-1165-4657-a3ec-04899eb20cef-run-httpd\") pod 
\"70ad0ef0-1165-4657-a3ec-04899eb20cef\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.326375 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-sg-core-conf-yaml\") pod \"70ad0ef0-1165-4657-a3ec-04899eb20cef\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.326466 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-combined-ca-bundle\") pod \"70ad0ef0-1165-4657-a3ec-04899eb20cef\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.326561 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-ceilometer-tls-certs\") pod \"70ad0ef0-1165-4657-a3ec-04899eb20cef\" (UID: \"70ad0ef0-1165-4657-a3ec-04899eb20cef\") " Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.328361 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70ad0ef0-1165-4657-a3ec-04899eb20cef-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "70ad0ef0-1165-4657-a3ec-04899eb20cef" (UID: "70ad0ef0-1165-4657-a3ec-04899eb20cef"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.329033 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70ad0ef0-1165-4657-a3ec-04899eb20cef-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "70ad0ef0-1165-4657-a3ec-04899eb20cef" (UID: "70ad0ef0-1165-4657-a3ec-04899eb20cef"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.358418 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70ad0ef0-1165-4657-a3ec-04899eb20cef-kube-api-access-72hvn" (OuterVolumeSpecName: "kube-api-access-72hvn") pod "70ad0ef0-1165-4657-a3ec-04899eb20cef" (UID: "70ad0ef0-1165-4657-a3ec-04899eb20cef"). InnerVolumeSpecName "kube-api-access-72hvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.368936 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-scripts" (OuterVolumeSpecName: "scripts") pod "70ad0ef0-1165-4657-a3ec-04899eb20cef" (UID: "70ad0ef0-1165-4657-a3ec-04899eb20cef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.429305 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72hvn\" (UniqueName: \"kubernetes.io/projected/70ad0ef0-1165-4657-a3ec-04899eb20cef-kube-api-access-72hvn\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.429340 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.429349 4728 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70ad0ef0-1165-4657-a3ec-04899eb20cef-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.429358 4728 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70ad0ef0-1165-4657-a3ec-04899eb20cef-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:55 
crc kubenswrapper[4728]: I0227 10:52:55.450861 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "70ad0ef0-1165-4657-a3ec-04899eb20cef" (UID: "70ad0ef0-1165-4657-a3ec-04899eb20cef"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.516285 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-config-data" (OuterVolumeSpecName: "config-data") pod "70ad0ef0-1165-4657-a3ec-04899eb20cef" (UID: "70ad0ef0-1165-4657-a3ec-04899eb20cef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.531131 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.533108 4728 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.553826 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70ad0ef0-1165-4657-a3ec-04899eb20cef" (UID: "70ad0ef0-1165-4657-a3ec-04899eb20cef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.564214 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "70ad0ef0-1165-4657-a3ec-04899eb20cef" (UID: "70ad0ef0-1165-4657-a3ec-04899eb20cef"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.637029 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.637064 4728 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/70ad0ef0-1165-4657-a3ec-04899eb20cef-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.639334 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xxv8"] Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.869849 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-spm46"] Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.870308 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-spm46" podUID="57410d64-6726-4c64-b9f4-e1eaad0aa42e" containerName="registry-server" containerID="cri-o://c92115c63a3d04786afb7d35601bf52b4816f3ef644c288fd306cd0bcb0d2784" gracePeriod=2 Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.891746 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fn4cs" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.944603 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6vjl\" (UniqueName: \"kubernetes.io/projected/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2-kube-api-access-g6vjl\") pod \"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2\" (UID: \"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2\") " Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.944768 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2-utilities\") pod \"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2\" (UID: \"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2\") " Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.944792 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2-catalog-content\") pod \"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2\" (UID: \"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2\") " Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.946409 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2-utilities" (OuterVolumeSpecName: "utilities") pod "9c6a8916-fd56-45ba-837e-78eb7fd7b7f2" (UID: "9c6a8916-fd56-45ba-837e-78eb7fd7b7f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.951577 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2-kube-api-access-g6vjl" (OuterVolumeSpecName: "kube-api-access-g6vjl") pod "9c6a8916-fd56-45ba-837e-78eb7fd7b7f2" (UID: "9c6a8916-fd56-45ba-837e-78eb7fd7b7f2"). InnerVolumeSpecName "kube-api-access-g6vjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:55 crc kubenswrapper[4728]: I0227 10:52:55.988244 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c6a8916-fd56-45ba-837e-78eb7fd7b7f2" (UID: "9c6a8916-fd56-45ba-837e-78eb7fd7b7f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.048243 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.048271 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.048282 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6vjl\" (UniqueName: \"kubernetes.io/projected/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2-kube-api-access-g6vjl\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.048314 4728 generic.go:334] "Generic (PLEG): container finished" podID="57410d64-6726-4c64-b9f4-e1eaad0aa42e" containerID="c92115c63a3d04786afb7d35601bf52b4816f3ef644c288fd306cd0bcb0d2784" exitCode=0 Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.048432 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-spm46" event={"ID":"57410d64-6726-4c64-b9f4-e1eaad0aa42e","Type":"ContainerDied","Data":"c92115c63a3d04786afb7d35601bf52b4816f3ef644c288fd306cd0bcb0d2784"} Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.070409 4728 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.070419 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70ad0ef0-1165-4657-a3ec-04899eb20cef","Type":"ContainerDied","Data":"b330e9a42e24520e7a6ab067764adec99bc42f8252ae6db7c07c9fbbeeb201fa"} Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.070473 4728 scope.go:117] "RemoveContainer" containerID="378dd42f3e62e4c7d82f39fa3d085f7c82795a118964f4db571797653ac95ded" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.088068 4728 generic.go:334] "Generic (PLEG): container finished" podID="9c6a8916-fd56-45ba-837e-78eb7fd7b7f2" containerID="7909ca97f45415011ac59b761937371303c6829059f1d23ea11e9524cd9fb36a" exitCode=0 Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.088156 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fn4cs" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.088149 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fn4cs" event={"ID":"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2","Type":"ContainerDied","Data":"7909ca97f45415011ac59b761937371303c6829059f1d23ea11e9524cd9fb36a"} Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.088306 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fn4cs" event={"ID":"9c6a8916-fd56-45ba-837e-78eb7fd7b7f2","Type":"ContainerDied","Data":"7b6367e016331b46edd4c1c7b40213ef835c171f53d35d597281c688e29cf1c2"} Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.122297 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.148763 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.156905 4728 
scope.go:117] "RemoveContainer" containerID="87d108b3b1f0feb359fd3c67b088cf3b6f45bae17d2bb18ee7d6df92afc02a2f" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.172746 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fn4cs"] Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.236417 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="ad00da50-2e05-4612-a862-5cccd698e77b" containerName="rabbitmq" containerID="cri-o://e23737f0ce37aa688192a9b087adece80e03ee173aeb672a9a8026ba67e7e977" gracePeriod=604795 Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.264321 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fn4cs"] Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.284318 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:52:56 crc kubenswrapper[4728]: E0227 10:52:56.292305 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerName="proxy-httpd" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.295465 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerName="proxy-httpd" Feb 27 10:52:56 crc kubenswrapper[4728]: E0227 10:52:56.295684 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerName="ceilometer-notification-agent" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.295767 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerName="ceilometer-notification-agent" Feb 27 10:52:56 crc kubenswrapper[4728]: E0227 10:52:56.295834 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6a8916-fd56-45ba-837e-78eb7fd7b7f2" containerName="registry-server" Feb 27 10:52:56 crc 
kubenswrapper[4728]: I0227 10:52:56.295886 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6a8916-fd56-45ba-837e-78eb7fd7b7f2" containerName="registry-server" Feb 27 10:52:56 crc kubenswrapper[4728]: E0227 10:52:56.295952 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6a8916-fd56-45ba-837e-78eb7fd7b7f2" containerName="extract-content" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.296003 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6a8916-fd56-45ba-837e-78eb7fd7b7f2" containerName="extract-content" Feb 27 10:52:56 crc kubenswrapper[4728]: E0227 10:52:56.296076 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6a8916-fd56-45ba-837e-78eb7fd7b7f2" containerName="extract-utilities" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.296128 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6a8916-fd56-45ba-837e-78eb7fd7b7f2" containerName="extract-utilities" Feb 27 10:52:56 crc kubenswrapper[4728]: E0227 10:52:56.296206 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerName="ceilometer-central-agent" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.296264 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerName="ceilometer-central-agent" Feb 27 10:52:56 crc kubenswrapper[4728]: E0227 10:52:56.296333 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerName="sg-core" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.296387 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerName="sg-core" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.296800 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerName="sg-core" Feb 27 10:52:56 crc 
kubenswrapper[4728]: I0227 10:52:56.297520 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerName="ceilometer-central-agent" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.297641 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerName="proxy-httpd" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.297736 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="70ad0ef0-1165-4657-a3ec-04899eb20cef" containerName="ceilometer-notification-agent" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.297795 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6a8916-fd56-45ba-837e-78eb7fd7b7f2" containerName="registry-server" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.300044 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.305205 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.305382 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.305566 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.330875 4728 scope.go:117] "RemoveContainer" containerID="4dac54798872b4072bb5aefc050c1191a2c216b050f28e90aa15e404c24dd0f8" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.357065 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77499a0a-be50-4d60-ae26-461a8c9742e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.357148 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77499a0a-be50-4d60-ae26-461a8c9742e5-run-httpd\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.357189 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77499a0a-be50-4d60-ae26-461a8c9742e5-log-httpd\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.357237 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tttb\" (UniqueName: \"kubernetes.io/projected/77499a0a-be50-4d60-ae26-461a8c9742e5-kube-api-access-9tttb\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.357256 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77499a0a-be50-4d60-ae26-461a8c9742e5-scripts\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.357293 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/77499a0a-be50-4d60-ae26-461a8c9742e5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 
10:52:56.357320 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77499a0a-be50-4d60-ae26-461a8c9742e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.357373 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77499a0a-be50-4d60-ae26-461a8c9742e5-config-data\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.360093 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.449448 4728 scope.go:117] "RemoveContainer" containerID="4126d162ae10b0aa3daa96a38b803ade7e58bc32b95e9d337099ed13d3cbe52f" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.460242 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tttb\" (UniqueName: \"kubernetes.io/projected/77499a0a-be50-4d60-ae26-461a8c9742e5-kube-api-access-9tttb\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.460520 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77499a0a-be50-4d60-ae26-461a8c9742e5-scripts\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.460801 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/77499a0a-be50-4d60-ae26-461a8c9742e5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.460969 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77499a0a-be50-4d60-ae26-461a8c9742e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.462397 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77499a0a-be50-4d60-ae26-461a8c9742e5-config-data\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.462773 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77499a0a-be50-4d60-ae26-461a8c9742e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.462895 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77499a0a-be50-4d60-ae26-461a8c9742e5-run-httpd\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.463063 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77499a0a-be50-4d60-ae26-461a8c9742e5-log-httpd\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 
10:52:56.463696 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77499a0a-be50-4d60-ae26-461a8c9742e5-log-httpd\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.464721 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77499a0a-be50-4d60-ae26-461a8c9742e5-run-httpd\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.476199 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77499a0a-be50-4d60-ae26-461a8c9742e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.476225 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77499a0a-be50-4d60-ae26-461a8c9742e5-scripts\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.476239 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77499a0a-be50-4d60-ae26-461a8c9742e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.476825 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/77499a0a-be50-4d60-ae26-461a8c9742e5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " 
pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.485298 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tttb\" (UniqueName: \"kubernetes.io/projected/77499a0a-be50-4d60-ae26-461a8c9742e5-kube-api-access-9tttb\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.505422 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77499a0a-be50-4d60-ae26-461a8c9742e5-config-data\") pod \"ceilometer-0\" (UID: \"77499a0a-be50-4d60-ae26-461a8c9742e5\") " pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.576386 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-spm46" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.596310 4728 scope.go:117] "RemoveContainer" containerID="7909ca97f45415011ac59b761937371303c6829059f1d23ea11e9524cd9fb36a" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.645342 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.667880 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57410d64-6726-4c64-b9f4-e1eaad0aa42e-utilities\") pod \"57410d64-6726-4c64-b9f4-e1eaad0aa42e\" (UID: \"57410d64-6726-4c64-b9f4-e1eaad0aa42e\") " Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.668053 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57410d64-6726-4c64-b9f4-e1eaad0aa42e-catalog-content\") pod \"57410d64-6726-4c64-b9f4-e1eaad0aa42e\" (UID: \"57410d64-6726-4c64-b9f4-e1eaad0aa42e\") " Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.668161 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlbdr\" (UniqueName: \"kubernetes.io/projected/57410d64-6726-4c64-b9f4-e1eaad0aa42e-kube-api-access-dlbdr\") pod \"57410d64-6726-4c64-b9f4-e1eaad0aa42e\" (UID: \"57410d64-6726-4c64-b9f4-e1eaad0aa42e\") " Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.670420 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57410d64-6726-4c64-b9f4-e1eaad0aa42e-utilities" (OuterVolumeSpecName: "utilities") pod "57410d64-6726-4c64-b9f4-e1eaad0aa42e" (UID: "57410d64-6726-4c64-b9f4-e1eaad0aa42e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.699091 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57410d64-6726-4c64-b9f4-e1eaad0aa42e-kube-api-access-dlbdr" (OuterVolumeSpecName: "kube-api-access-dlbdr") pod "57410d64-6726-4c64-b9f4-e1eaad0aa42e" (UID: "57410d64-6726-4c64-b9f4-e1eaad0aa42e"). InnerVolumeSpecName "kube-api-access-dlbdr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.702678 4728 scope.go:117] "RemoveContainer" containerID="4314e9dc2109a3698b2863ddc6375131f7055ac70f7d9683a7f306b7d70d1187" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.777998 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlbdr\" (UniqueName: \"kubernetes.io/projected/57410d64-6726-4c64-b9f4-e1eaad0aa42e-kube-api-access-dlbdr\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.778054 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57410d64-6726-4c64-b9f4-e1eaad0aa42e-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.786041 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70ad0ef0-1165-4657-a3ec-04899eb20cef" path="/var/lib/kubelet/pods/70ad0ef0-1165-4657-a3ec-04899eb20cef/volumes" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.787436 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c6a8916-fd56-45ba-837e-78eb7fd7b7f2" path="/var/lib/kubelet/pods/9c6a8916-fd56-45ba-837e-78eb7fd7b7f2/volumes" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.809710 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57410d64-6726-4c64-b9f4-e1eaad0aa42e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57410d64-6726-4c64-b9f4-e1eaad0aa42e" (UID: "57410d64-6726-4c64-b9f4-e1eaad0aa42e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.829966 4728 scope.go:117] "RemoveContainer" containerID="7d0a526bea657ac3b7f0454548f59bac988c33f070c6af16b0bead05a183ecf3" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.881328 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57410d64-6726-4c64-b9f4-e1eaad0aa42e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.889103 4728 scope.go:117] "RemoveContainer" containerID="7909ca97f45415011ac59b761937371303c6829059f1d23ea11e9524cd9fb36a" Feb 27 10:52:56 crc kubenswrapper[4728]: E0227 10:52:56.889736 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7909ca97f45415011ac59b761937371303c6829059f1d23ea11e9524cd9fb36a\": container with ID starting with 7909ca97f45415011ac59b761937371303c6829059f1d23ea11e9524cd9fb36a not found: ID does not exist" containerID="7909ca97f45415011ac59b761937371303c6829059f1d23ea11e9524cd9fb36a" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.889786 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7909ca97f45415011ac59b761937371303c6829059f1d23ea11e9524cd9fb36a"} err="failed to get container status \"7909ca97f45415011ac59b761937371303c6829059f1d23ea11e9524cd9fb36a\": rpc error: code = NotFound desc = could not find container \"7909ca97f45415011ac59b761937371303c6829059f1d23ea11e9524cd9fb36a\": container with ID starting with 7909ca97f45415011ac59b761937371303c6829059f1d23ea11e9524cd9fb36a not found: ID does not exist" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.889819 4728 scope.go:117] "RemoveContainer" containerID="4314e9dc2109a3698b2863ddc6375131f7055ac70f7d9683a7f306b7d70d1187" Feb 27 10:52:56 crc kubenswrapper[4728]: E0227 10:52:56.891203 4728 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4314e9dc2109a3698b2863ddc6375131f7055ac70f7d9683a7f306b7d70d1187\": container with ID starting with 4314e9dc2109a3698b2863ddc6375131f7055ac70f7d9683a7f306b7d70d1187 not found: ID does not exist" containerID="4314e9dc2109a3698b2863ddc6375131f7055ac70f7d9683a7f306b7d70d1187" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.891239 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4314e9dc2109a3698b2863ddc6375131f7055ac70f7d9683a7f306b7d70d1187"} err="failed to get container status \"4314e9dc2109a3698b2863ddc6375131f7055ac70f7d9683a7f306b7d70d1187\": rpc error: code = NotFound desc = could not find container \"4314e9dc2109a3698b2863ddc6375131f7055ac70f7d9683a7f306b7d70d1187\": container with ID starting with 4314e9dc2109a3698b2863ddc6375131f7055ac70f7d9683a7f306b7d70d1187 not found: ID does not exist" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.891263 4728 scope.go:117] "RemoveContainer" containerID="7d0a526bea657ac3b7f0454548f59bac988c33f070c6af16b0bead05a183ecf3" Feb 27 10:52:56 crc kubenswrapper[4728]: E0227 10:52:56.892248 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d0a526bea657ac3b7f0454548f59bac988c33f070c6af16b0bead05a183ecf3\": container with ID starting with 7d0a526bea657ac3b7f0454548f59bac988c33f070c6af16b0bead05a183ecf3 not found: ID does not exist" containerID="7d0a526bea657ac3b7f0454548f59bac988c33f070c6af16b0bead05a183ecf3" Feb 27 10:52:56 crc kubenswrapper[4728]: I0227 10:52:56.892278 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d0a526bea657ac3b7f0454548f59bac988c33f070c6af16b0bead05a183ecf3"} err="failed to get container status \"7d0a526bea657ac3b7f0454548f59bac988c33f070c6af16b0bead05a183ecf3\": rpc error: code = NotFound desc = could 
not find container \"7d0a526bea657ac3b7f0454548f59bac988c33f070c6af16b0bead05a183ecf3\": container with ID starting with 7d0a526bea657ac3b7f0454548f59bac988c33f070c6af16b0bead05a183ecf3 not found: ID does not exist" Feb 27 10:52:57 crc kubenswrapper[4728]: I0227 10:52:57.115576 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-spm46" event={"ID":"57410d64-6726-4c64-b9f4-e1eaad0aa42e","Type":"ContainerDied","Data":"fff444b2a9eea7d510780b700c768f44bce8071c641caf45f904e79e8411d930"} Feb 27 10:52:57 crc kubenswrapper[4728]: I0227 10:52:57.115634 4728 scope.go:117] "RemoveContainer" containerID="c92115c63a3d04786afb7d35601bf52b4816f3ef644c288fd306cd0bcb0d2784" Feb 27 10:52:57 crc kubenswrapper[4728]: I0227 10:52:57.115777 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-spm46" Feb 27 10:52:57 crc kubenswrapper[4728]: I0227 10:52:57.187388 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-spm46"] Feb 27 10:52:57 crc kubenswrapper[4728]: I0227 10:52:57.190832 4728 scope.go:117] "RemoveContainer" containerID="1317e781975b21b186df5dbfe0094b2fad674a6e8d198ca9385741c716a27134" Feb 27 10:52:57 crc kubenswrapper[4728]: I0227 10:52:57.212959 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-spm46"] Feb 27 10:52:57 crc kubenswrapper[4728]: I0227 10:52:57.242427 4728 scope.go:117] "RemoveContainer" containerID="b32ca1b4d615aafe881cd8f44e80a65caea0b120141194e8e3df6b47bbeaf098" Feb 27 10:52:57 crc kubenswrapper[4728]: I0227 10:52:57.423690 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="26ecfb63-8476-497d-9cb3-3729c4961b4e" containerName="rabbitmq" containerID="cri-o://a163ccc7b13763efebb2e9f850417e278c80c19e153b529f3a59ccf7d23a4aec" gracePeriod=604795 Feb 27 10:52:57 crc 
kubenswrapper[4728]: I0227 10:52:57.569919 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 10:52:58 crc kubenswrapper[4728]: I0227 10:52:58.192611 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77499a0a-be50-4d60-ae26-461a8c9742e5","Type":"ContainerStarted","Data":"cbec47c01f36e0fb8cc0bad1d25065cfab628a95b87a13ded181bc5eca51ac0f"} Feb 27 10:52:58 crc kubenswrapper[4728]: I0227 10:52:58.755928 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57410d64-6726-4c64-b9f4-e1eaad0aa42e" path="/var/lib/kubelet/pods/57410d64-6726-4c64-b9f4-e1eaad0aa42e/volumes" Feb 27 10:53:01 crc kubenswrapper[4728]: I0227 10:53:01.009216 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="26ecfb63-8476-497d-9cb3-3729c4961b4e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused" Feb 27 10:53:01 crc kubenswrapper[4728]: I0227 10:53:01.692305 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="ad00da50-2e05-4612-a862-5cccd698e77b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: connect: connection refused" Feb 27 10:53:03 crc kubenswrapper[4728]: I0227 10:53:03.295034 4728 generic.go:334] "Generic (PLEG): container finished" podID="ad00da50-2e05-4612-a862-5cccd698e77b" containerID="e23737f0ce37aa688192a9b087adece80e03ee173aeb672a9a8026ba67e7e977" exitCode=0 Feb 27 10:53:03 crc kubenswrapper[4728]: I0227 10:53:03.295352 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ad00da50-2e05-4612-a862-5cccd698e77b","Type":"ContainerDied","Data":"e23737f0ce37aa688192a9b087adece80e03ee173aeb672a9a8026ba67e7e977"} Feb 27 10:53:04 crc kubenswrapper[4728]: I0227 10:53:04.308847 4728 generic.go:334] "Generic (PLEG): container finished" 
podID="26ecfb63-8476-497d-9cb3-3729c4961b4e" containerID="a163ccc7b13763efebb2e9f850417e278c80c19e153b529f3a59ccf7d23a4aec" exitCode=0 Feb 27 10:53:04 crc kubenswrapper[4728]: I0227 10:53:04.309194 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"26ecfb63-8476-497d-9cb3-3729c4961b4e","Type":"ContainerDied","Data":"a163ccc7b13763efebb2e9f850417e278c80c19e153b529f3a59ccf7d23a4aec"} Feb 27 10:53:06 crc kubenswrapper[4728]: I0227 10:53:06.590833 4728 scope.go:117] "RemoveContainer" containerID="76552ff94d7d3ab0f442fe3ba6993cf6738e1eed6e0320cd6d09dbc5a8eb995e" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.490065 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-lcsng"] Feb 27 10:53:07 crc kubenswrapper[4728]: E0227 10:53:07.491142 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57410d64-6726-4c64-b9f4-e1eaad0aa42e" containerName="registry-server" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.491163 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="57410d64-6726-4c64-b9f4-e1eaad0aa42e" containerName="registry-server" Feb 27 10:53:07 crc kubenswrapper[4728]: E0227 10:53:07.491173 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57410d64-6726-4c64-b9f4-e1eaad0aa42e" containerName="extract-content" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.491201 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="57410d64-6726-4c64-b9f4-e1eaad0aa42e" containerName="extract-content" Feb 27 10:53:07 crc kubenswrapper[4728]: E0227 10:53:07.491214 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57410d64-6726-4c64-b9f4-e1eaad0aa42e" containerName="extract-utilities" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.491220 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="57410d64-6726-4c64-b9f4-e1eaad0aa42e" containerName="extract-utilities" Feb 27 10:53:07 crc 
kubenswrapper[4728]: I0227 10:53:07.491524 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="57410d64-6726-4c64-b9f4-e1eaad0aa42e" containerName="registry-server" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.495459 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.504462 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.512794 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-lcsng"] Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.590611 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.590671 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-dns-svc\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.591085 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.591395 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.591874 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-config\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.592098 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.592350 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chppb\" (UniqueName: \"kubernetes.io/projected/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-kube-api-access-chppb\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.695262 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chppb\" (UniqueName: \"kubernetes.io/projected/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-kube-api-access-chppb\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc 
kubenswrapper[4728]: I0227 10:53:07.701820 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.697496 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.701968 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-dns-svc\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.702113 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.702222 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.702395 
4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-config\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.702493 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.703689 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.703786 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.703920 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-config\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.703927 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-dns-svc\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.703955 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.717829 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chppb\" (UniqueName: \"kubernetes.io/projected/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-kube-api-access-chppb\") pod \"dnsmasq-dns-594cb89c79-lcsng\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:07 crc kubenswrapper[4728]: I0227 10:53:07.817228 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:15 crc kubenswrapper[4728]: I0227 10:53:15.988330 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:15 crc kubenswrapper[4728]: I0227 10:53:15.998944 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.009143 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="26ecfb63-8476-497d-9cb3-3729c4961b4e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: i/o timeout" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.168854 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad00da50-2e05-4612-a862-5cccd698e77b-config-data\") pod \"ad00da50-2e05-4612-a862-5cccd698e77b\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.168906 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqc66\" (UniqueName: \"kubernetes.io/projected/ad00da50-2e05-4612-a862-5cccd698e77b-kube-api-access-xqc66\") pod \"ad00da50-2e05-4612-a862-5cccd698e77b\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.169564 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\") pod \"ad00da50-2e05-4612-a862-5cccd698e77b\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.169643 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-plugins\") pod \"ad00da50-2e05-4612-a862-5cccd698e77b\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.169681 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/ad00da50-2e05-4612-a862-5cccd698e77b-plugins-conf\") pod \"ad00da50-2e05-4612-a862-5cccd698e77b\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.169739 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-confd\") pod \"26ecfb63-8476-497d-9cb3-3729c4961b4e\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.169800 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-tls\") pod \"26ecfb63-8476-497d-9cb3-3729c4961b4e\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.169830 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad00da50-2e05-4612-a862-5cccd698e77b-server-conf\") pod \"ad00da50-2e05-4612-a862-5cccd698e77b\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.170070 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ad00da50-2e05-4612-a862-5cccd698e77b" (UID: "ad00da50-2e05-4612-a862-5cccd698e77b"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.170283 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\") pod \"26ecfb63-8476-497d-9cb3-3729c4961b4e\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.170350 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-erlang-cookie\") pod \"26ecfb63-8476-497d-9cb3-3729c4961b4e\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.170380 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-confd\") pod \"ad00da50-2e05-4612-a862-5cccd698e77b\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.170402 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-tls\") pod \"ad00da50-2e05-4612-a862-5cccd698e77b\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.170429 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26ecfb63-8476-497d-9cb3-3729c4961b4e-plugins-conf\") pod \"26ecfb63-8476-497d-9cb3-3729c4961b4e\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.170454 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-erlang-cookie\") pod \"ad00da50-2e05-4612-a862-5cccd698e77b\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.170479 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26ecfb63-8476-497d-9cb3-3729c4961b4e-server-conf\") pod \"26ecfb63-8476-497d-9cb3-3729c4961b4e\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.170541 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlhqr\" (UniqueName: \"kubernetes.io/projected/26ecfb63-8476-497d-9cb3-3729c4961b4e-kube-api-access-xlhqr\") pod \"26ecfb63-8476-497d-9cb3-3729c4961b4e\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.170566 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad00da50-2e05-4612-a862-5cccd698e77b-erlang-cookie-secret\") pod \"ad00da50-2e05-4612-a862-5cccd698e77b\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.170601 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26ecfb63-8476-497d-9cb3-3729c4961b4e-erlang-cookie-secret\") pod \"26ecfb63-8476-497d-9cb3-3729c4961b4e\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.170661 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26ecfb63-8476-497d-9cb3-3729c4961b4e-pod-info\") pod \"26ecfb63-8476-497d-9cb3-3729c4961b4e\" (UID: 
\"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.170743 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad00da50-2e05-4612-a862-5cccd698e77b-pod-info\") pod \"ad00da50-2e05-4612-a862-5cccd698e77b\" (UID: \"ad00da50-2e05-4612-a862-5cccd698e77b\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.170756 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "26ecfb63-8476-497d-9cb3-3729c4961b4e" (UID: "26ecfb63-8476-497d-9cb3-3729c4961b4e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.170774 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-plugins\") pod \"26ecfb63-8476-497d-9cb3-3729c4961b4e\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.170859 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26ecfb63-8476-497d-9cb3-3729c4961b4e-config-data\") pod \"26ecfb63-8476-497d-9cb3-3729c4961b4e\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.171095 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "26ecfb63-8476-497d-9cb3-3729c4961b4e" (UID: "26ecfb63-8476-497d-9cb3-3729c4961b4e"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.171953 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.171971 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.171980 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.172698 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad00da50-2e05-4612-a862-5cccd698e77b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ad00da50-2e05-4612-a862-5cccd698e77b" (UID: "ad00da50-2e05-4612-a862-5cccd698e77b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.179576 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ad00da50-2e05-4612-a862-5cccd698e77b" (UID: "ad00da50-2e05-4612-a862-5cccd698e77b"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.180382 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "26ecfb63-8476-497d-9cb3-3729c4961b4e" (UID: "26ecfb63-8476-497d-9cb3-3729c4961b4e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.180970 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26ecfb63-8476-497d-9cb3-3729c4961b4e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "26ecfb63-8476-497d-9cb3-3729c4961b4e" (UID: "26ecfb63-8476-497d-9cb3-3729c4961b4e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.183722 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ad00da50-2e05-4612-a862-5cccd698e77b-pod-info" (OuterVolumeSpecName: "pod-info") pod "ad00da50-2e05-4612-a862-5cccd698e77b" (UID: "ad00da50-2e05-4612-a862-5cccd698e77b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.185673 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad00da50-2e05-4612-a862-5cccd698e77b-kube-api-access-xqc66" (OuterVolumeSpecName: "kube-api-access-xqc66") pod "ad00da50-2e05-4612-a862-5cccd698e77b" (UID: "ad00da50-2e05-4612-a862-5cccd698e77b"). InnerVolumeSpecName "kube-api-access-xqc66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.185743 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ecfb63-8476-497d-9cb3-3729c4961b4e-kube-api-access-xlhqr" (OuterVolumeSpecName: "kube-api-access-xlhqr") pod "26ecfb63-8476-497d-9cb3-3729c4961b4e" (UID: "26ecfb63-8476-497d-9cb3-3729c4961b4e"). InnerVolumeSpecName "kube-api-access-xlhqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.185736 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ecfb63-8476-497d-9cb3-3729c4961b4e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "26ecfb63-8476-497d-9cb3-3729c4961b4e" (UID: "26ecfb63-8476-497d-9cb3-3729c4961b4e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.189149 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ad00da50-2e05-4612-a862-5cccd698e77b" (UID: "ad00da50-2e05-4612-a862-5cccd698e77b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.199622 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad00da50-2e05-4612-a862-5cccd698e77b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ad00da50-2e05-4612-a862-5cccd698e77b" (UID: "ad00da50-2e05-4612-a862-5cccd698e77b"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.230123 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/26ecfb63-8476-497d-9cb3-3729c4961b4e-pod-info" (OuterVolumeSpecName: "pod-info") pod "26ecfb63-8476-497d-9cb3-3729c4961b4e" (UID: "26ecfb63-8476-497d-9cb3-3729c4961b4e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.244836 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26ecfb63-8476-497d-9cb3-3729c4961b4e-config-data" (OuterVolumeSpecName: "config-data") pod "26ecfb63-8476-497d-9cb3-3729c4961b4e" (UID: "26ecfb63-8476-497d-9cb3-3729c4961b4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.256489 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad00da50-2e05-4612-a862-5cccd698e77b-config-data" (OuterVolumeSpecName: "config-data") pod "ad00da50-2e05-4612-a862-5cccd698e77b" (UID: "ad00da50-2e05-4612-a862-5cccd698e77b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: E0227 10:53:16.256844 4728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0766c1d2-55b8-4e58-9f95-8902126e782c podName:26ecfb63-8476-497d-9cb3-3729c4961b4e nodeName:}" failed. No retries permitted until 2026-02-27 10:53:16.75682007 +0000 UTC m=+1616.719186176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0766c1d2-55b8-4e58-9f95-8902126e782c") pod "26ecfb63-8476-497d-9cb3-3729c4961b4e" (UID: "26ecfb63-8476-497d-9cb3-3729c4961b4e") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.273315 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqc66\" (UniqueName: \"kubernetes.io/projected/ad00da50-2e05-4612-a862-5cccd698e77b-kube-api-access-xqc66\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.273342 4728 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad00da50-2e05-4612-a862-5cccd698e77b-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.273352 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.273360 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.273368 4728 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26ecfb63-8476-497d-9cb3-3729c4961b4e-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.273376 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.273385 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlhqr\" (UniqueName: \"kubernetes.io/projected/26ecfb63-8476-497d-9cb3-3729c4961b4e-kube-api-access-xlhqr\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.273394 4728 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad00da50-2e05-4612-a862-5cccd698e77b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.273412 4728 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26ecfb63-8476-497d-9cb3-3729c4961b4e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.273419 4728 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26ecfb63-8476-497d-9cb3-3729c4961b4e-pod-info\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.273428 4728 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad00da50-2e05-4612-a862-5cccd698e77b-pod-info\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.273437 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26ecfb63-8476-497d-9cb3-3729c4961b4e-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.273447 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad00da50-2e05-4612-a862-5cccd698e77b-config-data\") on node \"crc\" DevicePath \"\"" 
Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.275978 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6483969b-d2a4-484b-83ca-44d2967b94b0" (OuterVolumeSpecName: "persistence") pod "ad00da50-2e05-4612-a862-5cccd698e77b" (UID: "ad00da50-2e05-4612-a862-5cccd698e77b"). InnerVolumeSpecName "pvc-6483969b-d2a4-484b-83ca-44d2967b94b0". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.305262 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26ecfb63-8476-497d-9cb3-3729c4961b4e-server-conf" (OuterVolumeSpecName: "server-conf") pod "26ecfb63-8476-497d-9cb3-3729c4961b4e" (UID: "26ecfb63-8476-497d-9cb3-3729c4961b4e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.341248 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad00da50-2e05-4612-a862-5cccd698e77b-server-conf" (OuterVolumeSpecName: "server-conf") pod "ad00da50-2e05-4612-a862-5cccd698e77b" (UID: "ad00da50-2e05-4612-a862-5cccd698e77b"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.375431 4728 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\") on node \"crc\" " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.375472 4728 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad00da50-2e05-4612-a862-5cccd698e77b-server-conf\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.375516 4728 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26ecfb63-8476-497d-9cb3-3729c4961b4e-server-conf\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.411084 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ad00da50-2e05-4612-a862-5cccd698e77b" (UID: "ad00da50-2e05-4612-a862-5cccd698e77b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.453076 4728 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.453477 4728 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6483969b-d2a4-484b-83ca-44d2967b94b0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6483969b-d2a4-484b-83ca-44d2967b94b0") on node "crc" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.465721 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "26ecfb63-8476-497d-9cb3-3729c4961b4e" (UID: "26ecfb63-8476-497d-9cb3-3729c4961b4e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.477848 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26ecfb63-8476-497d-9cb3-3729c4961b4e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.480973 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad00da50-2e05-4612-a862-5cccd698e77b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.481167 4728 reconciler_common.go:293] "Volume detached for volume \"pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.509110 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.509105 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"26ecfb63-8476-497d-9cb3-3729c4961b4e","Type":"ContainerDied","Data":"20bd26906667b33fbc7ef32e65d32ccef8bfad6703dd271d66749056416e5032"} Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.509354 4728 scope.go:117] "RemoveContainer" containerID="a163ccc7b13763efebb2e9f850417e278c80c19e153b529f3a59ccf7d23a4aec" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.515050 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ad00da50-2e05-4612-a862-5cccd698e77b","Type":"ContainerDied","Data":"6e35e91f6db4e63f3741e13105b339155b27780cf82cd562159503978bb6aacc"} Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.515107 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.571353 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.597134 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.609040 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 27 10:53:16 crc kubenswrapper[4728]: E0227 10:53:16.609732 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad00da50-2e05-4612-a862-5cccd698e77b" containerName="setup-container" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.609760 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad00da50-2e05-4612-a862-5cccd698e77b" containerName="setup-container" Feb 27 10:53:16 crc kubenswrapper[4728]: E0227 10:53:16.609826 4728 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="26ecfb63-8476-497d-9cb3-3729c4961b4e" containerName="rabbitmq" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.609837 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ecfb63-8476-497d-9cb3-3729c4961b4e" containerName="rabbitmq" Feb 27 10:53:16 crc kubenswrapper[4728]: E0227 10:53:16.609857 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ecfb63-8476-497d-9cb3-3729c4961b4e" containerName="setup-container" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.609866 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ecfb63-8476-497d-9cb3-3729c4961b4e" containerName="setup-container" Feb 27 10:53:16 crc kubenswrapper[4728]: E0227 10:53:16.609882 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad00da50-2e05-4612-a862-5cccd698e77b" containerName="rabbitmq" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.609890 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad00da50-2e05-4612-a862-5cccd698e77b" containerName="rabbitmq" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.610166 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad00da50-2e05-4612-a862-5cccd698e77b" containerName="rabbitmq" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.610206 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ecfb63-8476-497d-9cb3-3729c4961b4e" containerName="rabbitmq" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.611810 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.625474 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.686573 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/008a6414-799f-47de-a238-a5fdefc314ca-pod-info\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.686632 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/008a6414-799f-47de-a238-a5fdefc314ca-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.686666 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/008a6414-799f-47de-a238-a5fdefc314ca-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.686702 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/008a6414-799f-47de-a238-a5fdefc314ca-config-data\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.686751 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/008a6414-799f-47de-a238-a5fdefc314ca-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.686845 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/008a6414-799f-47de-a238-a5fdefc314ca-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.686869 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.686941 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g8t9\" (UniqueName: \"kubernetes.io/projected/008a6414-799f-47de-a238-a5fdefc314ca-kube-api-access-6g8t9\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.687065 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/008a6414-799f-47de-a238-a5fdefc314ca-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.687084 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/008a6414-799f-47de-a238-a5fdefc314ca-server-conf\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.687146 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/008a6414-799f-47de-a238-a5fdefc314ca-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.692061 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="ad00da50-2e05-4612-a862-5cccd698e77b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: i/o timeout" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.738575 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad00da50-2e05-4612-a862-5cccd698e77b" path="/var/lib/kubelet/pods/ad00da50-2e05-4612-a862-5cccd698e77b/volumes" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.795595 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\") pod \"26ecfb63-8476-497d-9cb3-3729c4961b4e\" (UID: \"26ecfb63-8476-497d-9cb3-3729c4961b4e\") " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.796133 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/008a6414-799f-47de-a238-a5fdefc314ca-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.796178 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.796203 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g8t9\" (UniqueName: \"kubernetes.io/projected/008a6414-799f-47de-a238-a5fdefc314ca-kube-api-access-6g8t9\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.796372 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/008a6414-799f-47de-a238-a5fdefc314ca-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.796391 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/008a6414-799f-47de-a238-a5fdefc314ca-server-conf\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.796453 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/008a6414-799f-47de-a238-a5fdefc314ca-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.796492 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/008a6414-799f-47de-a238-a5fdefc314ca-pod-info\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.796558 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/008a6414-799f-47de-a238-a5fdefc314ca-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.796592 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/008a6414-799f-47de-a238-a5fdefc314ca-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.796680 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/008a6414-799f-47de-a238-a5fdefc314ca-config-data\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.796746 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/008a6414-799f-47de-a238-a5fdefc314ca-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.799634 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/008a6414-799f-47de-a238-a5fdefc314ca-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " 
pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.801215 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/008a6414-799f-47de-a238-a5fdefc314ca-config-data\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.801716 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/008a6414-799f-47de-a238-a5fdefc314ca-pod-info\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.801792 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/008a6414-799f-47de-a238-a5fdefc314ca-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.802272 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/008a6414-799f-47de-a238-a5fdefc314ca-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.802901 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/008a6414-799f-47de-a238-a5fdefc314ca-server-conf\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.803149 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/008a6414-799f-47de-a238-a5fdefc314ca-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.804251 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/008a6414-799f-47de-a238-a5fdefc314ca-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.804635 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.804675 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/425f9233c08c849daa276787437e4d0b866a73669054d95a34efdd19e1b082af/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.805335 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/008a6414-799f-47de-a238-a5fdefc314ca-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.813855 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g8t9\" (UniqueName: \"kubernetes.io/projected/008a6414-799f-47de-a238-a5fdefc314ca-kube-api-access-6g8t9\") pod \"rabbitmq-server-2\" (UID: 
\"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.828406 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0766c1d2-55b8-4e58-9f95-8902126e782c" (OuterVolumeSpecName: "persistence") pod "26ecfb63-8476-497d-9cb3-3729c4961b4e" (UID: "26ecfb63-8476-497d-9cb3-3729c4961b4e"). InnerVolumeSpecName "pvc-0766c1d2-55b8-4e58-9f95-8902126e782c". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 10:53:16 crc kubenswrapper[4728]: E0227 10:53:16.856155 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 27 10:53:16 crc kubenswrapper[4728]: E0227 10:53:16.856404 4728 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 27 10:53:16 crc kubenswrapper[4728]: E0227 10:53:16.856647 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf9h7bh665hf7h65ch54h9bh7h5c7h548h565h77h5d4hc4h5b9h66dh99hbchfh65h5cch66bhffhb6h84hf8h5b4h57fh5f9h674h5ch5d9q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9tttb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(77499a0a-be50-4d60-ae26-461a8c9742e5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.875726 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6483969b-d2a4-484b-83ca-44d2967b94b0\") pod \"rabbitmq-server-2\" (UID: \"008a6414-799f-47de-a238-a5fdefc314ca\") " pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.905634 4728 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\") on node \"crc\" " Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.936783 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.954457 4728 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.955395 4728 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0766c1d2-55b8-4e58-9f95-8902126e782c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0766c1d2-55b8-4e58-9f95-8902126e782c") on node "crc" Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.974639 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.985750 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 10:53:16 crc kubenswrapper[4728]: I0227 10:53:16.997000 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.000368 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.003755 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.004066 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.004250 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.004418 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.004609 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.004802 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.004985 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-6zc2k" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.007496 4728 reconciler_common.go:293] "Volume detached for volume \"pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.018794 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.112070 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/7363c956-6c7e-4e11-bfb1-6be6ba94771e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.112173 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7363c956-6c7e-4e11-bfb1-6be6ba94771e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.112232 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7363c956-6c7e-4e11-bfb1-6be6ba94771e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.112283 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7363c956-6c7e-4e11-bfb1-6be6ba94771e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.112322 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pr9p\" (UniqueName: \"kubernetes.io/projected/7363c956-6c7e-4e11-bfb1-6be6ba94771e-kube-api-access-6pr9p\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.112399 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/7363c956-6c7e-4e11-bfb1-6be6ba94771e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.112457 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.112529 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7363c956-6c7e-4e11-bfb1-6be6ba94771e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.112556 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7363c956-6c7e-4e11-bfb1-6be6ba94771e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.112599 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7363c956-6c7e-4e11-bfb1-6be6ba94771e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.112666 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7363c956-6c7e-4e11-bfb1-6be6ba94771e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.214262 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7363c956-6c7e-4e11-bfb1-6be6ba94771e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.214329 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.214367 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7363c956-6c7e-4e11-bfb1-6be6ba94771e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.214383 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7363c956-6c7e-4e11-bfb1-6be6ba94771e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.214417 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/7363c956-6c7e-4e11-bfb1-6be6ba94771e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.214450 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7363c956-6c7e-4e11-bfb1-6be6ba94771e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.214484 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7363c956-6c7e-4e11-bfb1-6be6ba94771e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.214617 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7363c956-6c7e-4e11-bfb1-6be6ba94771e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.214661 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7363c956-6c7e-4e11-bfb1-6be6ba94771e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.214697 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7363c956-6c7e-4e11-bfb1-6be6ba94771e-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.214726 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pr9p\" (UniqueName: \"kubernetes.io/projected/7363c956-6c7e-4e11-bfb1-6be6ba94771e-kube-api-access-6pr9p\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.215387 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7363c956-6c7e-4e11-bfb1-6be6ba94771e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.215961 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7363c956-6c7e-4e11-bfb1-6be6ba94771e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.216298 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7363c956-6c7e-4e11-bfb1-6be6ba94771e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.216399 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7363c956-6c7e-4e11-bfb1-6be6ba94771e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.217630 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7363c956-6c7e-4e11-bfb1-6be6ba94771e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.218409 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.218448 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/323f8e7c36144bd439e3d750bd883408bca281a20485637d7338f56eabb24e88/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.220675 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7363c956-6c7e-4e11-bfb1-6be6ba94771e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.223703 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7363c956-6c7e-4e11-bfb1-6be6ba94771e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.224232 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7363c956-6c7e-4e11-bfb1-6be6ba94771e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.251956 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7363c956-6c7e-4e11-bfb1-6be6ba94771e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.253211 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pr9p\" (UniqueName: \"kubernetes.io/projected/7363c956-6c7e-4e11-bfb1-6be6ba94771e-kube-api-access-6pr9p\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.354647 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0766c1d2-55b8-4e58-9f95-8902126e782c\") pod \"rabbitmq-cell1-server-0\" (UID: \"7363c956-6c7e-4e11-bfb1-6be6ba94771e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.581920 4728 scope.go:117] "RemoveContainer" containerID="7085535c1cf06df2af491ea6ba1e48ccf7c883b1ebac3eccf340158c02955b37" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.622544 4728 scope.go:117] "RemoveContainer" containerID="e23737f0ce37aa688192a9b087adece80e03ee173aeb672a9a8026ba67e7e977" Feb 27 10:53:17 crc kubenswrapper[4728]: E0227 10:53:17.628234 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 27 10:53:17 crc kubenswrapper[4728]: E0227 10:53:17.628317 4728 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 27 10:53:17 crc kubenswrapper[4728]: E0227 10:53:17.628579 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvwg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,I
magePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tqhw2_openstack(c112afc6-4352-4004-885a-0b1d88caffae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 10:53:17 crc kubenswrapper[4728]: E0227 10:53:17.631879 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-tqhw2" podUID="c112afc6-4352-4004-885a-0b1d88caffae" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.631999 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:17 crc kubenswrapper[4728]: I0227 10:53:17.716796 4728 scope.go:117] "RemoveContainer" containerID="ece13434c955547aaf3f7f164eaf74b912d99426d2f94d33488bf7c110f9b30c" Feb 27 10:53:18 crc kubenswrapper[4728]: I0227 10:53:18.028252 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-lcsng"] Feb 27 10:53:18 crc kubenswrapper[4728]: I0227 10:53:18.243686 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 27 10:53:18 crc kubenswrapper[4728]: W0227 10:53:18.249666 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod008a6414_799f_47de_a238_a5fdefc314ca.slice/crio-1627e6b03fe928047096803c9c4d11be9031fbb16c78f4b95da7bd4e40765cdd WatchSource:0}: Error finding container 1627e6b03fe928047096803c9c4d11be9031fbb16c78f4b95da7bd4e40765cdd: Status 404 returned error can't find the container with id 1627e6b03fe928047096803c9c4d11be9031fbb16c78f4b95da7bd4e40765cdd Feb 27 10:53:18 crc kubenswrapper[4728]: I0227 10:53:18.270281 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 10:53:18 crc kubenswrapper[4728]: I0227 10:53:18.537127 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"008a6414-799f-47de-a238-a5fdefc314ca","Type":"ContainerStarted","Data":"1627e6b03fe928047096803c9c4d11be9031fbb16c78f4b95da7bd4e40765cdd"} Feb 27 10:53:18 crc kubenswrapper[4728]: I0227 10:53:18.539833 4728 generic.go:334] "Generic (PLEG): container finished" podID="53be9ab0-5753-4942-bdd4-3efb5bb0d4d0" containerID="8c9c24f15aa05ffc7a725ace2e3b567b40a0dda2935a7d2a99f09423c4c81a04" exitCode=0 Feb 27 10:53:18 crc kubenswrapper[4728]: I0227 10:53:18.539889 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-lcsng" 
event={"ID":"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0","Type":"ContainerDied","Data":"8c9c24f15aa05ffc7a725ace2e3b567b40a0dda2935a7d2a99f09423c4c81a04"} Feb 27 10:53:18 crc kubenswrapper[4728]: I0227 10:53:18.539906 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-lcsng" event={"ID":"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0","Type":"ContainerStarted","Data":"1bb5105a78c31da7e469bc50e14f50be0b8e0bfcb876c244df1723a0e08a86a1"} Feb 27 10:53:18 crc kubenswrapper[4728]: I0227 10:53:18.541521 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77499a0a-be50-4d60-ae26-461a8c9742e5","Type":"ContainerStarted","Data":"b724b18ecc05fc964c1c55d300dd79ed494a99c6dad9540dea1392d0cd1205ed"} Feb 27 10:53:18 crc kubenswrapper[4728]: I0227 10:53:18.543766 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7363c956-6c7e-4e11-bfb1-6be6ba94771e","Type":"ContainerStarted","Data":"463adfaab5f198bbb66b1517faf3487f675443c016e5b971c2e4e66ff2efc58f"} Feb 27 10:53:18 crc kubenswrapper[4728]: E0227 10:53:18.549260 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tqhw2" podUID="c112afc6-4352-4004-885a-0b1d88caffae" Feb 27 10:53:18 crc kubenswrapper[4728]: I0227 10:53:18.738834 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ecfb63-8476-497d-9cb3-3729c4961b4e" path="/var/lib/kubelet/pods/26ecfb63-8476-497d-9cb3-3729c4961b4e/volumes" Feb 27 10:53:19 crc kubenswrapper[4728]: I0227 10:53:19.567283 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-lcsng" 
event={"ID":"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0","Type":"ContainerStarted","Data":"6fd08f4cb4ef4327f8e3ca9fb5b0de6c23f3d95ff52c2ab049c43b1514738394"} Feb 27 10:53:19 crc kubenswrapper[4728]: I0227 10:53:19.567627 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:19 crc kubenswrapper[4728]: I0227 10:53:19.570333 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77499a0a-be50-4d60-ae26-461a8c9742e5","Type":"ContainerStarted","Data":"583b5b003b302f1bd33e1ab893f78c64d53211c5c51476304fab2dbb4275288e"} Feb 27 10:53:19 crc kubenswrapper[4728]: I0227 10:53:19.599253 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-594cb89c79-lcsng" podStartSLOduration=12.599121824000001 podStartE2EDuration="12.599121824s" podCreationTimestamp="2026-02-27 10:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:53:19.591178108 +0000 UTC m=+1619.553544234" watchObservedRunningTime="2026-02-27 10:53:19.599121824 +0000 UTC m=+1619.561487950" Feb 27 10:53:20 crc kubenswrapper[4728]: I0227 10:53:20.583027 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7363c956-6c7e-4e11-bfb1-6be6ba94771e","Type":"ContainerStarted","Data":"49b6f2d70816ca5d6d9a4f15531ca6dd8fb3d1bb5e8d896989730843fbcb73ec"} Feb 27 10:53:20 crc kubenswrapper[4728]: I0227 10:53:20.584649 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"008a6414-799f-47de-a238-a5fdefc314ca","Type":"ContainerStarted","Data":"34f4e5c27c04871abd7ae8cd54db5d7011624e9e28950f8858b26d9e9b76f67b"} Feb 27 10:53:21 crc kubenswrapper[4728]: E0227 10:53:21.321305 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="77499a0a-be50-4d60-ae26-461a8c9742e5" Feb 27 10:53:21 crc kubenswrapper[4728]: I0227 10:53:21.599398 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77499a0a-be50-4d60-ae26-461a8c9742e5","Type":"ContainerStarted","Data":"8ed56e0e258dd964e907cf97a976a39c11afaab0ea592a46a8e6fa288dc77b0e"} Feb 27 10:53:21 crc kubenswrapper[4728]: E0227 10:53:21.601530 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="77499a0a-be50-4d60-ae26-461a8c9742e5" Feb 27 10:53:22 crc kubenswrapper[4728]: I0227 10:53:22.608627 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 10:53:22 crc kubenswrapper[4728]: E0227 10:53:22.610971 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="77499a0a-be50-4d60-ae26-461a8c9742e5" Feb 27 10:53:23 crc kubenswrapper[4728]: E0227 10:53:23.621804 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="77499a0a-be50-4d60-ae26-461a8c9742e5" Feb 27 10:53:27 crc kubenswrapper[4728]: I0227 10:53:27.819708 4728 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:27 crc kubenswrapper[4728]: I0227 10:53:27.900621 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-gr6np"] Feb 27 10:53:27 crc kubenswrapper[4728]: I0227 10:53:27.901188 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" podUID="4447fe65-418c-43b8-aa5a-b78a7d97fe56" containerName="dnsmasq-dns" containerID="cri-o://9daef0f852c0139f194c3aa591c9673245e5aaef34c3d7f724a39223b54d9f05" gracePeriod=10 Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.104980 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-6qtwm"] Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.107088 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.132073 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-6qtwm"] Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.222332 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/13a77c59-db09-46ca-aa8c-88651c29be68-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.222816 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13a77c59-db09-46ca-aa8c-88651c29be68-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 
10:53:28.222946 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13a77c59-db09-46ca-aa8c-88651c29be68-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.222979 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a77c59-db09-46ca-aa8c-88651c29be68-config\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.223020 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13a77c59-db09-46ca-aa8c-88651c29be68-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.223123 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13a77c59-db09-46ca-aa8c-88651c29be68-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.223163 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9vt6\" (UniqueName: \"kubernetes.io/projected/13a77c59-db09-46ca-aa8c-88651c29be68-kube-api-access-l9vt6\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc 
kubenswrapper[4728]: I0227 10:53:28.333878 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13a77c59-db09-46ca-aa8c-88651c29be68-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.334139 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13a77c59-db09-46ca-aa8c-88651c29be68-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.334205 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a77c59-db09-46ca-aa8c-88651c29be68-config\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.334260 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13a77c59-db09-46ca-aa8c-88651c29be68-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.334447 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13a77c59-db09-46ca-aa8c-88651c29be68-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.334492 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l9vt6\" (UniqueName: \"kubernetes.io/projected/13a77c59-db09-46ca-aa8c-88651c29be68-kube-api-access-l9vt6\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.334568 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/13a77c59-db09-46ca-aa8c-88651c29be68-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.335786 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/13a77c59-db09-46ca-aa8c-88651c29be68-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.336458 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13a77c59-db09-46ca-aa8c-88651c29be68-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.337188 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13a77c59-db09-46ca-aa8c-88651c29be68-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.342697 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13a77c59-db09-46ca-aa8c-88651c29be68-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.343291 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13a77c59-db09-46ca-aa8c-88651c29be68-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.343569 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a77c59-db09-46ca-aa8c-88651c29be68-config\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.377695 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9vt6\" (UniqueName: \"kubernetes.io/projected/13a77c59-db09-46ca-aa8c-88651c29be68-kube-api-access-l9vt6\") pod \"dnsmasq-dns-5596c69fcc-6qtwm\" (UID: \"13a77c59-db09-46ca-aa8c-88651c29be68\") " pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.435998 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.564200 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.685857 4728 generic.go:334] "Generic (PLEG): container finished" podID="4447fe65-418c-43b8-aa5a-b78a7d97fe56" containerID="9daef0f852c0139f194c3aa591c9673245e5aaef34c3d7f724a39223b54d9f05" exitCode=0 Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.685927 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" event={"ID":"4447fe65-418c-43b8-aa5a-b78a7d97fe56","Type":"ContainerDied","Data":"9daef0f852c0139f194c3aa591c9673245e5aaef34c3d7f724a39223b54d9f05"} Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.686007 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" event={"ID":"4447fe65-418c-43b8-aa5a-b78a7d97fe56","Type":"ContainerDied","Data":"14de993c05220d694141d7f7be1545bef0f4cd44c062b4ff6edf1371b0bbd17d"} Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.686054 4728 scope.go:117] "RemoveContainer" containerID="9daef0f852c0139f194c3aa591c9673245e5aaef34c3d7f724a39223b54d9f05" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.686391 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-gr6np" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.752520 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-ovsdbserver-nb\") pod \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.752617 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-ovsdbserver-sb\") pod \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.752643 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-dns-svc\") pod \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.752674 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-dns-swift-storage-0\") pod \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.752787 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp2g7\" (UniqueName: \"kubernetes.io/projected/4447fe65-418c-43b8-aa5a-b78a7d97fe56-kube-api-access-wp2g7\") pod \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.752899 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-config\") pod \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\" (UID: \"4447fe65-418c-43b8-aa5a-b78a7d97fe56\") " Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.772664 4728 scope.go:117] "RemoveContainer" containerID="a602bd244e0453c0b392a6c151070755904dbf737d422bc8d0d1a2b5487ba005" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.777930 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4447fe65-418c-43b8-aa5a-b78a7d97fe56-kube-api-access-wp2g7" (OuterVolumeSpecName: "kube-api-access-wp2g7") pod "4447fe65-418c-43b8-aa5a-b78a7d97fe56" (UID: "4447fe65-418c-43b8-aa5a-b78a7d97fe56"). InnerVolumeSpecName "kube-api-access-wp2g7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.856293 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp2g7\" (UniqueName: \"kubernetes.io/projected/4447fe65-418c-43b8-aa5a-b78a7d97fe56-kube-api-access-wp2g7\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.859420 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4447fe65-418c-43b8-aa5a-b78a7d97fe56" (UID: "4447fe65-418c-43b8-aa5a-b78a7d97fe56"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.865145 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4447fe65-418c-43b8-aa5a-b78a7d97fe56" (UID: "4447fe65-418c-43b8-aa5a-b78a7d97fe56"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.866148 4728 scope.go:117] "RemoveContainer" containerID="9daef0f852c0139f194c3aa591c9673245e5aaef34c3d7f724a39223b54d9f05" Feb 27 10:53:28 crc kubenswrapper[4728]: E0227 10:53:28.866483 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9daef0f852c0139f194c3aa591c9673245e5aaef34c3d7f724a39223b54d9f05\": container with ID starting with 9daef0f852c0139f194c3aa591c9673245e5aaef34c3d7f724a39223b54d9f05 not found: ID does not exist" containerID="9daef0f852c0139f194c3aa591c9673245e5aaef34c3d7f724a39223b54d9f05" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.866551 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9daef0f852c0139f194c3aa591c9673245e5aaef34c3d7f724a39223b54d9f05"} err="failed to get container status \"9daef0f852c0139f194c3aa591c9673245e5aaef34c3d7f724a39223b54d9f05\": rpc error: code = NotFound desc = could not find container \"9daef0f852c0139f194c3aa591c9673245e5aaef34c3d7f724a39223b54d9f05\": container with ID starting with 9daef0f852c0139f194c3aa591c9673245e5aaef34c3d7f724a39223b54d9f05 not found: ID does not exist" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.866571 4728 scope.go:117] "RemoveContainer" containerID="a602bd244e0453c0b392a6c151070755904dbf737d422bc8d0d1a2b5487ba005" Feb 27 10:53:28 crc kubenswrapper[4728]: E0227 10:53:28.867677 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a602bd244e0453c0b392a6c151070755904dbf737d422bc8d0d1a2b5487ba005\": container with ID starting with a602bd244e0453c0b392a6c151070755904dbf737d422bc8d0d1a2b5487ba005 not found: ID does not exist" containerID="a602bd244e0453c0b392a6c151070755904dbf737d422bc8d0d1a2b5487ba005" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.867699 
4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a602bd244e0453c0b392a6c151070755904dbf737d422bc8d0d1a2b5487ba005"} err="failed to get container status \"a602bd244e0453c0b392a6c151070755904dbf737d422bc8d0d1a2b5487ba005\": rpc error: code = NotFound desc = could not find container \"a602bd244e0453c0b392a6c151070755904dbf737d422bc8d0d1a2b5487ba005\": container with ID starting with a602bd244e0453c0b392a6c151070755904dbf737d422bc8d0d1a2b5487ba005 not found: ID does not exist" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.871090 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4447fe65-418c-43b8-aa5a-b78a7d97fe56" (UID: "4447fe65-418c-43b8-aa5a-b78a7d97fe56"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.871861 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-config" (OuterVolumeSpecName: "config") pod "4447fe65-418c-43b8-aa5a-b78a7d97fe56" (UID: "4447fe65-418c-43b8-aa5a-b78a7d97fe56"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.874983 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4447fe65-418c-43b8-aa5a-b78a7d97fe56" (UID: "4447fe65-418c-43b8-aa5a-b78a7d97fe56"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.958339 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.958375 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.958387 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.958395 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.958405 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4447fe65-418c-43b8-aa5a-b78a7d97fe56-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:28 crc kubenswrapper[4728]: I0227 10:53:28.990740 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-6qtwm"] Feb 27 10:53:28 crc kubenswrapper[4728]: W0227 10:53:28.994014 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13a77c59_db09_46ca_aa8c_88651c29be68.slice/crio-749018cdc44a0f3f42491d9b002b18277541ba045087123ba8aa1b67f95c817e WatchSource:0}: Error finding container 749018cdc44a0f3f42491d9b002b18277541ba045087123ba8aa1b67f95c817e: Status 404 returned error can't find the 
container with id 749018cdc44a0f3f42491d9b002b18277541ba045087123ba8aa1b67f95c817e Feb 27 10:53:29 crc kubenswrapper[4728]: I0227 10:53:29.025345 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-gr6np"] Feb 27 10:53:29 crc kubenswrapper[4728]: I0227 10:53:29.037645 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-gr6np"] Feb 27 10:53:29 crc kubenswrapper[4728]: I0227 10:53:29.700836 4728 generic.go:334] "Generic (PLEG): container finished" podID="13a77c59-db09-46ca-aa8c-88651c29be68" containerID="f397b1c8faa7f909116ffeb19c5aba9790fa8365a8bf32d8d3035932a88d087b" exitCode=0 Feb 27 10:53:29 crc kubenswrapper[4728]: I0227 10:53:29.700951 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" event={"ID":"13a77c59-db09-46ca-aa8c-88651c29be68","Type":"ContainerDied","Data":"f397b1c8faa7f909116ffeb19c5aba9790fa8365a8bf32d8d3035932a88d087b"} Feb 27 10:53:29 crc kubenswrapper[4728]: I0227 10:53:29.701178 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" event={"ID":"13a77c59-db09-46ca-aa8c-88651c29be68","Type":"ContainerStarted","Data":"749018cdc44a0f3f42491d9b002b18277541ba045087123ba8aa1b67f95c817e"} Feb 27 10:53:30 crc kubenswrapper[4728]: I0227 10:53:30.715362 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" event={"ID":"13a77c59-db09-46ca-aa8c-88651c29be68","Type":"ContainerStarted","Data":"8be102c8db6795a994f320901c8bd6418c9a69854f0bf8737152550889288170"} Feb 27 10:53:30 crc kubenswrapper[4728]: I0227 10:53:30.715952 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:30 crc kubenswrapper[4728]: I0227 10:53:30.755672 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" podStartSLOduration=2.755644927 
podStartE2EDuration="2.755644927s" podCreationTimestamp="2026-02-27 10:53:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:53:30.746046727 +0000 UTC m=+1630.708412843" watchObservedRunningTime="2026-02-27 10:53:30.755644927 +0000 UTC m=+1630.718011053" Feb 27 10:53:30 crc kubenswrapper[4728]: I0227 10:53:30.762676 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4447fe65-418c-43b8-aa5a-b78a7d97fe56" path="/var/lib/kubelet/pods/4447fe65-418c-43b8-aa5a-b78a7d97fe56/volumes" Feb 27 10:53:33 crc kubenswrapper[4728]: I0227 10:53:33.752357 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-tqhw2" event={"ID":"c112afc6-4352-4004-885a-0b1d88caffae","Type":"ContainerStarted","Data":"5b684cf6d573b4c93d90a97b6cc27fcc4526db3433995506c07a94d098162944"} Feb 27 10:53:33 crc kubenswrapper[4728]: I0227 10:53:33.773094 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-tqhw2" podStartSLOduration=1.885345448 podStartE2EDuration="45.773075436s" podCreationTimestamp="2026-02-27 10:52:48 +0000 UTC" firstStartedPulling="2026-02-27 10:52:49.17491312 +0000 UTC m=+1589.137279226" lastFinishedPulling="2026-02-27 10:53:33.062643108 +0000 UTC m=+1633.025009214" observedRunningTime="2026-02-27 10:53:33.772085159 +0000 UTC m=+1633.734451265" watchObservedRunningTime="2026-02-27 10:53:33.773075436 +0000 UTC m=+1633.735441542" Feb 27 10:53:35 crc kubenswrapper[4728]: I0227 10:53:35.737046 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 27 10:53:35 crc kubenswrapper[4728]: I0227 10:53:35.780710 4728 generic.go:334] "Generic (PLEG): container finished" podID="c112afc6-4352-4004-885a-0b1d88caffae" containerID="5b684cf6d573b4c93d90a97b6cc27fcc4526db3433995506c07a94d098162944" exitCode=0 Feb 27 10:53:35 crc kubenswrapper[4728]: I0227 
10:53:35.780830 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-tqhw2" event={"ID":"c112afc6-4352-4004-885a-0b1d88caffae","Type":"ContainerDied","Data":"5b684cf6d573b4c93d90a97b6cc27fcc4526db3433995506c07a94d098162944"} Feb 27 10:53:36 crc kubenswrapper[4728]: I0227 10:53:36.795882 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77499a0a-be50-4d60-ae26-461a8c9742e5","Type":"ContainerStarted","Data":"82feca42f099c0af97e41144aabf5066839a665e3dcf6d3b5d6f1b739c500224"} Feb 27 10:53:36 crc kubenswrapper[4728]: I0227 10:53:36.839782 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.33942997 podStartE2EDuration="40.83975243s" podCreationTimestamp="2026-02-27 10:52:56 +0000 UTC" firstStartedPulling="2026-02-27 10:52:57.573146057 +0000 UTC m=+1597.535512153" lastFinishedPulling="2026-02-27 10:53:36.073468507 +0000 UTC m=+1636.035834613" observedRunningTime="2026-02-27 10:53:36.82055472 +0000 UTC m=+1636.782920876" watchObservedRunningTime="2026-02-27 10:53:36.83975243 +0000 UTC m=+1636.802118546" Feb 27 10:53:37 crc kubenswrapper[4728]: I0227 10:53:37.341070 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-tqhw2" Feb 27 10:53:37 crc kubenswrapper[4728]: I0227 10:53:37.476956 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvwg2\" (UniqueName: \"kubernetes.io/projected/c112afc6-4352-4004-885a-0b1d88caffae-kube-api-access-bvwg2\") pod \"c112afc6-4352-4004-885a-0b1d88caffae\" (UID: \"c112afc6-4352-4004-885a-0b1d88caffae\") " Feb 27 10:53:37 crc kubenswrapper[4728]: I0227 10:53:37.477142 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c112afc6-4352-4004-885a-0b1d88caffae-config-data\") pod \"c112afc6-4352-4004-885a-0b1d88caffae\" (UID: \"c112afc6-4352-4004-885a-0b1d88caffae\") " Feb 27 10:53:37 crc kubenswrapper[4728]: I0227 10:53:37.477322 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c112afc6-4352-4004-885a-0b1d88caffae-combined-ca-bundle\") pod \"c112afc6-4352-4004-885a-0b1d88caffae\" (UID: \"c112afc6-4352-4004-885a-0b1d88caffae\") " Feb 27 10:53:37 crc kubenswrapper[4728]: I0227 10:53:37.495440 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c112afc6-4352-4004-885a-0b1d88caffae-kube-api-access-bvwg2" (OuterVolumeSpecName: "kube-api-access-bvwg2") pod "c112afc6-4352-4004-885a-0b1d88caffae" (UID: "c112afc6-4352-4004-885a-0b1d88caffae"). InnerVolumeSpecName "kube-api-access-bvwg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:53:37 crc kubenswrapper[4728]: I0227 10:53:37.540344 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c112afc6-4352-4004-885a-0b1d88caffae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c112afc6-4352-4004-885a-0b1d88caffae" (UID: "c112afc6-4352-4004-885a-0b1d88caffae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:53:37 crc kubenswrapper[4728]: I0227 10:53:37.582698 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c112afc6-4352-4004-885a-0b1d88caffae-config-data" (OuterVolumeSpecName: "config-data") pod "c112afc6-4352-4004-885a-0b1d88caffae" (UID: "c112afc6-4352-4004-885a-0b1d88caffae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:53:37 crc kubenswrapper[4728]: I0227 10:53:37.583862 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c112afc6-4352-4004-885a-0b1d88caffae-config-data\") pod \"c112afc6-4352-4004-885a-0b1d88caffae\" (UID: \"c112afc6-4352-4004-885a-0b1d88caffae\") " Feb 27 10:53:37 crc kubenswrapper[4728]: W0227 10:53:37.584009 4728 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c112afc6-4352-4004-885a-0b1d88caffae/volumes/kubernetes.io~secret/config-data Feb 27 10:53:37 crc kubenswrapper[4728]: I0227 10:53:37.584043 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c112afc6-4352-4004-885a-0b1d88caffae-config-data" (OuterVolumeSpecName: "config-data") pod "c112afc6-4352-4004-885a-0b1d88caffae" (UID: "c112afc6-4352-4004-885a-0b1d88caffae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:53:37 crc kubenswrapper[4728]: I0227 10:53:37.584852 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvwg2\" (UniqueName: \"kubernetes.io/projected/c112afc6-4352-4004-885a-0b1d88caffae-kube-api-access-bvwg2\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:37 crc kubenswrapper[4728]: I0227 10:53:37.584901 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c112afc6-4352-4004-885a-0b1d88caffae-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:37 crc kubenswrapper[4728]: I0227 10:53:37.584913 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c112afc6-4352-4004-885a-0b1d88caffae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:37 crc kubenswrapper[4728]: I0227 10:53:37.814665 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-tqhw2" Feb 27 10:53:37 crc kubenswrapper[4728]: I0227 10:53:37.819289 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-tqhw2" event={"ID":"c112afc6-4352-4004-885a-0b1d88caffae","Type":"ContainerDied","Data":"0869876b9b5cb3dd83cb9bbfb69cabc1a1326d8268840c67a59a8e6e1a2ed001"} Feb 27 10:53:37 crc kubenswrapper[4728]: I0227 10:53:37.819351 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0869876b9b5cb3dd83cb9bbfb69cabc1a1326d8268840c67a59a8e6e1a2ed001" Feb 27 10:53:38 crc kubenswrapper[4728]: I0227 10:53:38.438736 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5596c69fcc-6qtwm" Feb 27 10:53:38 crc kubenswrapper[4728]: I0227 10:53:38.537020 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-lcsng"] Feb 27 10:53:38 crc kubenswrapper[4728]: I0227 10:53:38.538904 4728 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-594cb89c79-lcsng" podUID="53be9ab0-5753-4942-bdd4-3efb5bb0d4d0" containerName="dnsmasq-dns" containerID="cri-o://6fd08f4cb4ef4327f8e3ca9fb5b0de6c23f3d95ff52c2ab049c43b1514738394" gracePeriod=10 Feb 27 10:53:38 crc kubenswrapper[4728]: I0227 10:53:38.829210 4728 generic.go:334] "Generic (PLEG): container finished" podID="53be9ab0-5753-4942-bdd4-3efb5bb0d4d0" containerID="6fd08f4cb4ef4327f8e3ca9fb5b0de6c23f3d95ff52c2ab049c43b1514738394" exitCode=0 Feb 27 10:53:38 crc kubenswrapper[4728]: I0227 10:53:38.829252 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-lcsng" event={"ID":"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0","Type":"ContainerDied","Data":"6fd08f4cb4ef4327f8e3ca9fb5b0de6c23f3d95ff52c2ab049c43b1514738394"} Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.131614 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.188228 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-76cf949bdc-hqx69"] Feb 27 10:53:39 crc kubenswrapper[4728]: E0227 10:53:39.188704 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4447fe65-418c-43b8-aa5a-b78a7d97fe56" containerName="init" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.188715 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4447fe65-418c-43b8-aa5a-b78a7d97fe56" containerName="init" Feb 27 10:53:39 crc kubenswrapper[4728]: E0227 10:53:39.188736 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53be9ab0-5753-4942-bdd4-3efb5bb0d4d0" containerName="init" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.188742 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="53be9ab0-5753-4942-bdd4-3efb5bb0d4d0" containerName="init" Feb 27 10:53:39 crc 
kubenswrapper[4728]: E0227 10:53:39.188751 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4447fe65-418c-43b8-aa5a-b78a7d97fe56" containerName="dnsmasq-dns" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.188757 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4447fe65-418c-43b8-aa5a-b78a7d97fe56" containerName="dnsmasq-dns" Feb 27 10:53:39 crc kubenswrapper[4728]: E0227 10:53:39.188776 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53be9ab0-5753-4942-bdd4-3efb5bb0d4d0" containerName="dnsmasq-dns" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.188781 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="53be9ab0-5753-4942-bdd4-3efb5bb0d4d0" containerName="dnsmasq-dns" Feb 27 10:53:39 crc kubenswrapper[4728]: E0227 10:53:39.188807 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c112afc6-4352-4004-885a-0b1d88caffae" containerName="heat-db-sync" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.188813 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c112afc6-4352-4004-885a-0b1d88caffae" containerName="heat-db-sync" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.189023 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c112afc6-4352-4004-885a-0b1d88caffae" containerName="heat-db-sync" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.189037 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="53be9ab0-5753-4942-bdd4-3efb5bb0d4d0" containerName="dnsmasq-dns" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.189056 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4447fe65-418c-43b8-aa5a-b78a7d97fe56" containerName="dnsmasq-dns" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.189875 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-76cf949bdc-hqx69" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.232974 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-openstack-edpm-ipam\") pod \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.233085 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-config\") pod \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.233200 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-ovsdbserver-sb\") pod \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.233241 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-ovsdbserver-nb\") pod \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.233369 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chppb\" (UniqueName: \"kubernetes.io/projected/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-kube-api-access-chppb\") pod \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.233419 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-dns-svc\") pod \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.233441 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-dns-swift-storage-0\") pod \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\" (UID: \"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0\") " Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.292894 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-kube-api-access-chppb" (OuterVolumeSpecName: "kube-api-access-chppb") pod "53be9ab0-5753-4942-bdd4-3efb5bb0d4d0" (UID: "53be9ab0-5753-4942-bdd4-3efb5bb0d4d0"). InnerVolumeSpecName "kube-api-access-chppb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.316210 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-76cf949bdc-hqx69"] Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.337686 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01624ee9-8f8f-4c71-9eba-0fb900755b87-combined-ca-bundle\") pod \"heat-engine-76cf949bdc-hqx69\" (UID: \"01624ee9-8f8f-4c71-9eba-0fb900755b87\") " pod="openstack/heat-engine-76cf949bdc-hqx69" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.337794 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01624ee9-8f8f-4c71-9eba-0fb900755b87-config-data-custom\") pod \"heat-engine-76cf949bdc-hqx69\" (UID: \"01624ee9-8f8f-4c71-9eba-0fb900755b87\") " 
pod="openstack/heat-engine-76cf949bdc-hqx69" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.337910 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjdjm\" (UniqueName: \"kubernetes.io/projected/01624ee9-8f8f-4c71-9eba-0fb900755b87-kube-api-access-pjdjm\") pod \"heat-engine-76cf949bdc-hqx69\" (UID: \"01624ee9-8f8f-4c71-9eba-0fb900755b87\") " pod="openstack/heat-engine-76cf949bdc-hqx69" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.338063 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01624ee9-8f8f-4c71-9eba-0fb900755b87-config-data\") pod \"heat-engine-76cf949bdc-hqx69\" (UID: \"01624ee9-8f8f-4c71-9eba-0fb900755b87\") " pod="openstack/heat-engine-76cf949bdc-hqx69" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.338220 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chppb\" (UniqueName: \"kubernetes.io/projected/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-kube-api-access-chppb\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.383223 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "53be9ab0-5753-4942-bdd4-3efb5bb0d4d0" (UID: "53be9ab0-5753-4942-bdd4-3efb5bb0d4d0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.422339 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-65cd755798-zq9gs"] Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.427700 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.439637 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjdjm\" (UniqueName: \"kubernetes.io/projected/01624ee9-8f8f-4c71-9eba-0fb900755b87-kube-api-access-pjdjm\") pod \"heat-engine-76cf949bdc-hqx69\" (UID: \"01624ee9-8f8f-4c71-9eba-0fb900755b87\") " pod="openstack/heat-engine-76cf949bdc-hqx69" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.439762 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01624ee9-8f8f-4c71-9eba-0fb900755b87-config-data\") pod \"heat-engine-76cf949bdc-hqx69\" (UID: \"01624ee9-8f8f-4c71-9eba-0fb900755b87\") " pod="openstack/heat-engine-76cf949bdc-hqx69" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.439835 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01624ee9-8f8f-4c71-9eba-0fb900755b87-combined-ca-bundle\") pod \"heat-engine-76cf949bdc-hqx69\" (UID: \"01624ee9-8f8f-4c71-9eba-0fb900755b87\") " pod="openstack/heat-engine-76cf949bdc-hqx69" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.439876 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01624ee9-8f8f-4c71-9eba-0fb900755b87-config-data-custom\") pod \"heat-engine-76cf949bdc-hqx69\" (UID: \"01624ee9-8f8f-4c71-9eba-0fb900755b87\") " pod="openstack/heat-engine-76cf949bdc-hqx69" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.439977 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.443427 4728 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "53be9ab0-5753-4942-bdd4-3efb5bb0d4d0" (UID: "53be9ab0-5753-4942-bdd4-3efb5bb0d4d0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.446381 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01624ee9-8f8f-4c71-9eba-0fb900755b87-config-data-custom\") pod \"heat-engine-76cf949bdc-hqx69\" (UID: \"01624ee9-8f8f-4c71-9eba-0fb900755b87\") " pod="openstack/heat-engine-76cf949bdc-hqx69" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.452867 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01624ee9-8f8f-4c71-9eba-0fb900755b87-config-data\") pod \"heat-engine-76cf949bdc-hqx69\" (UID: \"01624ee9-8f8f-4c71-9eba-0fb900755b87\") " pod="openstack/heat-engine-76cf949bdc-hqx69" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.457515 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01624ee9-8f8f-4c71-9eba-0fb900755b87-combined-ca-bundle\") pod \"heat-engine-76cf949bdc-hqx69\" (UID: \"01624ee9-8f8f-4c71-9eba-0fb900755b87\") " pod="openstack/heat-engine-76cf949bdc-hqx69" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.459721 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "53be9ab0-5753-4942-bdd4-3efb5bb0d4d0" (UID: "53be9ab0-5753-4942-bdd4-3efb5bb0d4d0"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.460639 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7b6bc5d6b-5bs8p"] Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.464079 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjdjm\" (UniqueName: \"kubernetes.io/projected/01624ee9-8f8f-4c71-9eba-0fb900755b87-kube-api-access-pjdjm\") pod \"heat-engine-76cf949bdc-hqx69\" (UID: \"01624ee9-8f8f-4c71-9eba-0fb900755b87\") " pod="openstack/heat-engine-76cf949bdc-hqx69" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.467708 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.491129 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-65cd755798-zq9gs"] Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.496572 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "53be9ab0-5753-4942-bdd4-3efb5bb0d4d0" (UID: "53be9ab0-5753-4942-bdd4-3efb5bb0d4d0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.517022 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-config" (OuterVolumeSpecName: "config") pod "53be9ab0-5753-4942-bdd4-3efb5bb0d4d0" (UID: "53be9ab0-5753-4942-bdd4-3efb5bb0d4d0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.518787 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "53be9ab0-5753-4942-bdd4-3efb5bb0d4d0" (UID: "53be9ab0-5753-4942-bdd4-3efb5bb0d4d0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.525577 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b6bc5d6b-5bs8p"] Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.529059 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-76cf949bdc-hqx69" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.543900 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f-combined-ca-bundle\") pod \"heat-cfnapi-7b6bc5d6b-5bs8p\" (UID: \"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f\") " pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.543991 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33024243-7ee6-4299-9858-0f66d98188a4-config-data-custom\") pod \"heat-api-65cd755798-zq9gs\" (UID: \"33024243-7ee6-4299-9858-0f66d98188a4\") " pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.544016 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33024243-7ee6-4299-9858-0f66d98188a4-public-tls-certs\") pod \"heat-api-65cd755798-zq9gs\" (UID: 
\"33024243-7ee6-4299-9858-0f66d98188a4\") " pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.544039 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f-config-data\") pod \"heat-cfnapi-7b6bc5d6b-5bs8p\" (UID: \"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f\") " pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.544091 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33024243-7ee6-4299-9858-0f66d98188a4-config-data\") pod \"heat-api-65cd755798-zq9gs\" (UID: \"33024243-7ee6-4299-9858-0f66d98188a4\") " pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.544121 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f-config-data-custom\") pod \"heat-cfnapi-7b6bc5d6b-5bs8p\" (UID: \"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f\") " pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.544149 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b58cz\" (UniqueName: \"kubernetes.io/projected/33024243-7ee6-4299-9858-0f66d98188a4-kube-api-access-b58cz\") pod \"heat-api-65cd755798-zq9gs\" (UID: \"33024243-7ee6-4299-9858-0f66d98188a4\") " pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.544189 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dzrv\" (UniqueName: \"kubernetes.io/projected/f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f-kube-api-access-7dzrv\") pod 
\"heat-cfnapi-7b6bc5d6b-5bs8p\" (UID: \"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f\") " pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.544225 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33024243-7ee6-4299-9858-0f66d98188a4-combined-ca-bundle\") pod \"heat-api-65cd755798-zq9gs\" (UID: \"33024243-7ee6-4299-9858-0f66d98188a4\") " pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.544252 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f-internal-tls-certs\") pod \"heat-cfnapi-7b6bc5d6b-5bs8p\" (UID: \"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f\") " pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.544270 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f-public-tls-certs\") pod \"heat-cfnapi-7b6bc5d6b-5bs8p\" (UID: \"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f\") " pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.544293 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33024243-7ee6-4299-9858-0f66d98188a4-internal-tls-certs\") pod \"heat-api-65cd755798-zq9gs\" (UID: \"33024243-7ee6-4299-9858-0f66d98188a4\") " pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.544466 4728 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-ovsdbserver-nb\") on 
node \"crc\" DevicePath \"\"" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.544518 4728 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.544534 4728 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.544548 4728 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.544561 4728 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0-config\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.646777 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b58cz\" (UniqueName: \"kubernetes.io/projected/33024243-7ee6-4299-9858-0f66d98188a4-kube-api-access-b58cz\") pod \"heat-api-65cd755798-zq9gs\" (UID: \"33024243-7ee6-4299-9858-0f66d98188a4\") " pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.647172 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dzrv\" (UniqueName: \"kubernetes.io/projected/f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f-kube-api-access-7dzrv\") pod \"heat-cfnapi-7b6bc5d6b-5bs8p\" (UID: \"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f\") " pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.647250 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33024243-7ee6-4299-9858-0f66d98188a4-combined-ca-bundle\") pod \"heat-api-65cd755798-zq9gs\" (UID: \"33024243-7ee6-4299-9858-0f66d98188a4\") " pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.647289 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f-internal-tls-certs\") pod \"heat-cfnapi-7b6bc5d6b-5bs8p\" (UID: \"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f\") " pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.647332 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f-public-tls-certs\") pod \"heat-cfnapi-7b6bc5d6b-5bs8p\" (UID: \"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f\") " pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.647366 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33024243-7ee6-4299-9858-0f66d98188a4-internal-tls-certs\") pod \"heat-api-65cd755798-zq9gs\" (UID: \"33024243-7ee6-4299-9858-0f66d98188a4\") " pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.647481 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f-combined-ca-bundle\") pod \"heat-cfnapi-7b6bc5d6b-5bs8p\" (UID: \"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f\") " pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.647620 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33024243-7ee6-4299-9858-0f66d98188a4-config-data-custom\") pod \"heat-api-65cd755798-zq9gs\" (UID: \"33024243-7ee6-4299-9858-0f66d98188a4\") " pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.647655 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33024243-7ee6-4299-9858-0f66d98188a4-public-tls-certs\") pod \"heat-api-65cd755798-zq9gs\" (UID: \"33024243-7ee6-4299-9858-0f66d98188a4\") " pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.647723 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f-config-data\") pod \"heat-cfnapi-7b6bc5d6b-5bs8p\" (UID: \"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f\") " pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.647858 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33024243-7ee6-4299-9858-0f66d98188a4-config-data\") pod \"heat-api-65cd755798-zq9gs\" (UID: \"33024243-7ee6-4299-9858-0f66d98188a4\") " pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.647913 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f-config-data-custom\") pod \"heat-cfnapi-7b6bc5d6b-5bs8p\" (UID: \"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f\") " pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.658057 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/33024243-7ee6-4299-9858-0f66d98188a4-public-tls-certs\") pod \"heat-api-65cd755798-zq9gs\" (UID: \"33024243-7ee6-4299-9858-0f66d98188a4\") " pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.658236 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f-config-data\") pod \"heat-cfnapi-7b6bc5d6b-5bs8p\" (UID: \"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f\") " pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.659716 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f-config-data-custom\") pod \"heat-cfnapi-7b6bc5d6b-5bs8p\" (UID: \"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f\") " pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.664483 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33024243-7ee6-4299-9858-0f66d98188a4-internal-tls-certs\") pod \"heat-api-65cd755798-zq9gs\" (UID: \"33024243-7ee6-4299-9858-0f66d98188a4\") " pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.664738 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f-public-tls-certs\") pod \"heat-cfnapi-7b6bc5d6b-5bs8p\" (UID: \"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f\") " pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.664948 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33024243-7ee6-4299-9858-0f66d98188a4-config-data\") pod \"heat-api-65cd755798-zq9gs\" (UID: 
\"33024243-7ee6-4299-9858-0f66d98188a4\") " pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.665105 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33024243-7ee6-4299-9858-0f66d98188a4-config-data-custom\") pod \"heat-api-65cd755798-zq9gs\" (UID: \"33024243-7ee6-4299-9858-0f66d98188a4\") " pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.666239 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33024243-7ee6-4299-9858-0f66d98188a4-combined-ca-bundle\") pod \"heat-api-65cd755798-zq9gs\" (UID: \"33024243-7ee6-4299-9858-0f66d98188a4\") " pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.667941 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f-combined-ca-bundle\") pod \"heat-cfnapi-7b6bc5d6b-5bs8p\" (UID: \"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f\") " pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.668620 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f-internal-tls-certs\") pod \"heat-cfnapi-7b6bc5d6b-5bs8p\" (UID: \"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f\") " pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.677287 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dzrv\" (UniqueName: \"kubernetes.io/projected/f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f-kube-api-access-7dzrv\") pod \"heat-cfnapi-7b6bc5d6b-5bs8p\" (UID: \"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f\") " 
pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.679673 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b58cz\" (UniqueName: \"kubernetes.io/projected/33024243-7ee6-4299-9858-0f66d98188a4-kube-api-access-b58cz\") pod \"heat-api-65cd755798-zq9gs\" (UID: \"33024243-7ee6-4299-9858-0f66d98188a4\") " pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.762123 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.791882 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.866115 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-lcsng" event={"ID":"53be9ab0-5753-4942-bdd4-3efb5bb0d4d0","Type":"ContainerDied","Data":"1bb5105a78c31da7e469bc50e14f50be0b8e0bfcb876c244df1723a0e08a86a1"} Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.866352 4728 scope.go:117] "RemoveContainer" containerID="6fd08f4cb4ef4327f8e3ca9fb5b0de6c23f3d95ff52c2ab049c43b1514738394" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.866497 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-lcsng" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.968400 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-lcsng"] Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.969747 4728 scope.go:117] "RemoveContainer" containerID="8c9c24f15aa05ffc7a725ace2e3b567b40a0dda2935a7d2a99f09423c4c81a04" Feb 27 10:53:39 crc kubenswrapper[4728]: I0227 10:53:39.991189 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-lcsng"] Feb 27 10:53:40 crc kubenswrapper[4728]: I0227 10:53:40.072262 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-76cf949bdc-hqx69"] Feb 27 10:53:40 crc kubenswrapper[4728]: I0227 10:53:40.322092 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-65cd755798-zq9gs"] Feb 27 10:53:40 crc kubenswrapper[4728]: I0227 10:53:40.432159 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b6bc5d6b-5bs8p"] Feb 27 10:53:40 crc kubenswrapper[4728]: I0227 10:53:40.753417 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53be9ab0-5753-4942-bdd4-3efb5bb0d4d0" path="/var/lib/kubelet/pods/53be9ab0-5753-4942-bdd4-3efb5bb0d4d0/volumes" Feb 27 10:53:40 crc kubenswrapper[4728]: I0227 10:53:40.884197 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-65cd755798-zq9gs" event={"ID":"33024243-7ee6-4299-9858-0f66d98188a4","Type":"ContainerStarted","Data":"c9164272c4decc1412cfa3a9460acc620c17525d1068f12409f2e31b9e367a01"} Feb 27 10:53:40 crc kubenswrapper[4728]: I0227 10:53:40.887517 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-76cf949bdc-hqx69" event={"ID":"01624ee9-8f8f-4c71-9eba-0fb900755b87","Type":"ContainerStarted","Data":"ebc769a1b30ce15881de88741354f535e365e0ac2bd48ec7de584a56ad2a4912"} Feb 27 10:53:40 crc kubenswrapper[4728]: I0227 
10:53:40.887558 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-76cf949bdc-hqx69" event={"ID":"01624ee9-8f8f-4c71-9eba-0fb900755b87","Type":"ContainerStarted","Data":"ceb5bf4188aeabce119858cb877e77ea6300fcf7b49d58264c77eb591ce954fd"} Feb 27 10:53:40 crc kubenswrapper[4728]: I0227 10:53:40.887956 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-76cf949bdc-hqx69" Feb 27 10:53:40 crc kubenswrapper[4728]: I0227 10:53:40.894150 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" event={"ID":"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f","Type":"ContainerStarted","Data":"4f5bb104503d6213ae05585a33983d1112d924a09e1306921bfaf4ae9da9a6ea"} Feb 27 10:53:40 crc kubenswrapper[4728]: I0227 10:53:40.918522 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-76cf949bdc-hqx69" podStartSLOduration=1.918484316 podStartE2EDuration="1.918484316s" podCreationTimestamp="2026-02-27 10:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:53:40.905790082 +0000 UTC m=+1640.868156188" watchObservedRunningTime="2026-02-27 10:53:40.918484316 +0000 UTC m=+1640.880850422" Feb 27 10:53:42 crc kubenswrapper[4728]: I0227 10:53:42.922550 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-65cd755798-zq9gs" event={"ID":"33024243-7ee6-4299-9858-0f66d98188a4","Type":"ContainerStarted","Data":"93f76fcd835321fcdd408c3dc9f962b3a0db8db890ab572e6e487763fdc6f6bc"} Feb 27 10:53:42 crc kubenswrapper[4728]: I0227 10:53:42.923155 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:42 crc kubenswrapper[4728]: I0227 10:53:42.924318 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" 
event={"ID":"f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f","Type":"ContainerStarted","Data":"32af7ec0cf335a26838a5433b9488da012d6d5c61d00e07f7cd4942fe2f0a136"} Feb 27 10:53:42 crc kubenswrapper[4728]: I0227 10:53:42.924738 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:42 crc kubenswrapper[4728]: I0227 10:53:42.945376 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-65cd755798-zq9gs" podStartSLOduration=1.948139522 podStartE2EDuration="3.945354436s" podCreationTimestamp="2026-02-27 10:53:39 +0000 UTC" firstStartedPulling="2026-02-27 10:53:40.321123557 +0000 UTC m=+1640.283489663" lastFinishedPulling="2026-02-27 10:53:42.318338471 +0000 UTC m=+1642.280704577" observedRunningTime="2026-02-27 10:53:42.943984258 +0000 UTC m=+1642.906350364" watchObservedRunningTime="2026-02-27 10:53:42.945354436 +0000 UTC m=+1642.907720542" Feb 27 10:53:42 crc kubenswrapper[4728]: I0227 10:53:42.970870 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" podStartSLOduration=2.093628619 podStartE2EDuration="3.970844217s" podCreationTimestamp="2026-02-27 10:53:39 +0000 UTC" firstStartedPulling="2026-02-27 10:53:40.439114129 +0000 UTC m=+1640.401480235" lastFinishedPulling="2026-02-27 10:53:42.316329727 +0000 UTC m=+1642.278695833" observedRunningTime="2026-02-27 10:53:42.96099363 +0000 UTC m=+1642.923359736" watchObservedRunningTime="2026-02-27 10:53:42.970844217 +0000 UTC m=+1642.933210323" Feb 27 10:53:51 crc kubenswrapper[4728]: I0227 10:53:51.363782 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-65cd755798-zq9gs" Feb 27 10:53:51 crc kubenswrapper[4728]: I0227 10:53:51.441991 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-76558d5849-k75gx"] Feb 27 10:53:51 crc kubenswrapper[4728]: I0227 10:53:51.442209 4728 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-76558d5849-k75gx" podUID="6386555e-93c8-46af-bdc9-ca0db04f8712" containerName="heat-api" containerID="cri-o://6f020602c3a8c0f26c04aafcd0a976ad62bbbc9693061a0b5c11b4fb5b6081ac" gracePeriod=60 Feb 27 10:53:51 crc kubenswrapper[4728]: I0227 10:53:51.894900 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7b6bc5d6b-5bs8p" Feb 27 10:53:51 crc kubenswrapper[4728]: I0227 10:53:51.945541 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-69498cf5f9-8b2rb"] Feb 27 10:53:51 crc kubenswrapper[4728]: I0227 10:53:51.945766 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" podUID="f411351c-a796-4df4-9e09-407e93afb4a9" containerName="heat-cfnapi" containerID="cri-o://2ff6ea4b5f4988efbb09b919d8070765b7955305081f3fda1358c992bd6cff30" gracePeriod=60 Feb 27 10:53:52 crc kubenswrapper[4728]: E0227 10:53:52.506198 4728 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7363c956_6c7e_4e11_bfb1_6be6ba94771e.slice/crio-49b6f2d70816ca5d6d9a4f15531ca6dd8fb3d1bb5e8d896989730843fbcb73ec.scope\": RecentStats: unable to find data in memory cache]" Feb 27 10:53:53 crc kubenswrapper[4728]: I0227 10:53:53.062389 4728 generic.go:334] "Generic (PLEG): container finished" podID="008a6414-799f-47de-a238-a5fdefc314ca" containerID="34f4e5c27c04871abd7ae8cd54db5d7011624e9e28950f8858b26d9e9b76f67b" exitCode=0 Feb 27 10:53:53 crc kubenswrapper[4728]: I0227 10:53:53.062462 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"008a6414-799f-47de-a238-a5fdefc314ca","Type":"ContainerDied","Data":"34f4e5c27c04871abd7ae8cd54db5d7011624e9e28950f8858b26d9e9b76f67b"} Feb 27 10:53:53 crc kubenswrapper[4728]: 
I0227 10:53:53.065542 4728 generic.go:334] "Generic (PLEG): container finished" podID="7363c956-6c7e-4e11-bfb1-6be6ba94771e" containerID="49b6f2d70816ca5d6d9a4f15531ca6dd8fb3d1bb5e8d896989730843fbcb73ec" exitCode=0 Feb 27 10:53:53 crc kubenswrapper[4728]: I0227 10:53:53.065595 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7363c956-6c7e-4e11-bfb1-6be6ba94771e","Type":"ContainerDied","Data":"49b6f2d70816ca5d6d9a4f15531ca6dd8fb3d1bb5e8d896989730843fbcb73ec"} Feb 27 10:53:54 crc kubenswrapper[4728]: I0227 10:53:54.080315 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7363c956-6c7e-4e11-bfb1-6be6ba94771e","Type":"ContainerStarted","Data":"ac8be1f77ea9ef3cc0d18816534c8fcfe79d8dd1395c4d61b117086db1272a72"} Feb 27 10:53:54 crc kubenswrapper[4728]: I0227 10:53:54.082256 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:53:54 crc kubenswrapper[4728]: I0227 10:53:54.085195 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"008a6414-799f-47de-a238-a5fdefc314ca","Type":"ContainerStarted","Data":"b1468fce53db775f684827a8cbb50d6aefece98b5470e4da97918df44418f2bb"} Feb 27 10:53:54 crc kubenswrapper[4728]: I0227 10:53:54.085445 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 27 10:53:54 crc kubenswrapper[4728]: I0227 10:53:54.122891 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.122873228 podStartE2EDuration="38.122873228s" podCreationTimestamp="2026-02-27 10:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:53:54.116846215 +0000 UTC m=+1654.079212321" watchObservedRunningTime="2026-02-27 
10:53:54.122873228 +0000 UTC m=+1654.085239344" Feb 27 10:53:54 crc kubenswrapper[4728]: I0227 10:53:54.171335 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=38.171307553 podStartE2EDuration="38.171307553s" podCreationTimestamp="2026-02-27 10:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:53:54.147492007 +0000 UTC m=+1654.109858133" watchObservedRunningTime="2026-02-27 10:53:54.171307553 +0000 UTC m=+1654.133673659" Feb 27 10:53:55 crc kubenswrapper[4728]: I0227 10:53:55.097382 4728 generic.go:334] "Generic (PLEG): container finished" podID="6386555e-93c8-46af-bdc9-ca0db04f8712" containerID="6f020602c3a8c0f26c04aafcd0a976ad62bbbc9693061a0b5c11b4fb5b6081ac" exitCode=0 Feb 27 10:53:55 crc kubenswrapper[4728]: I0227 10:53:55.097463 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-76558d5849-k75gx" event={"ID":"6386555e-93c8-46af-bdc9-ca0db04f8712","Type":"ContainerDied","Data":"6f020602c3a8c0f26c04aafcd0a976ad62bbbc9693061a0b5c11b4fb5b6081ac"} Feb 27 10:53:55 crc kubenswrapper[4728]: I0227 10:53:55.943699 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:53:55 crc kubenswrapper[4728]: I0227 10:53:55.951364 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.025689 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-config-data\") pod \"f411351c-a796-4df4-9e09-407e93afb4a9\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.025745 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-public-tls-certs\") pod \"6386555e-93c8-46af-bdc9-ca0db04f8712\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.025784 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-865fc\" (UniqueName: \"kubernetes.io/projected/6386555e-93c8-46af-bdc9-ca0db04f8712-kube-api-access-865fc\") pod \"6386555e-93c8-46af-bdc9-ca0db04f8712\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.025837 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-combined-ca-bundle\") pod \"f411351c-a796-4df4-9e09-407e93afb4a9\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.025865 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvnv4\" (UniqueName: \"kubernetes.io/projected/f411351c-a796-4df4-9e09-407e93afb4a9-kube-api-access-lvnv4\") pod \"f411351c-a796-4df4-9e09-407e93afb4a9\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.025899 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-config-data\") pod \"6386555e-93c8-46af-bdc9-ca0db04f8712\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.025970 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-config-data-custom\") pod \"f411351c-a796-4df4-9e09-407e93afb4a9\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.025998 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-public-tls-certs\") pod \"f411351c-a796-4df4-9e09-407e93afb4a9\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.026080 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-internal-tls-certs\") pod \"6386555e-93c8-46af-bdc9-ca0db04f8712\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.026109 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-internal-tls-certs\") pod \"f411351c-a796-4df4-9e09-407e93afb4a9\" (UID: \"f411351c-a796-4df4-9e09-407e93afb4a9\") " Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.026129 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-config-data-custom\") pod \"6386555e-93c8-46af-bdc9-ca0db04f8712\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") 
" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.026173 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-combined-ca-bundle\") pod \"6386555e-93c8-46af-bdc9-ca0db04f8712\" (UID: \"6386555e-93c8-46af-bdc9-ca0db04f8712\") " Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.063133 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f411351c-a796-4df4-9e09-407e93afb4a9" (UID: "f411351c-a796-4df4-9e09-407e93afb4a9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.071058 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6386555e-93c8-46af-bdc9-ca0db04f8712" (UID: "6386555e-93c8-46af-bdc9-ca0db04f8712"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.077780 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6386555e-93c8-46af-bdc9-ca0db04f8712-kube-api-access-865fc" (OuterVolumeSpecName: "kube-api-access-865fc") pod "6386555e-93c8-46af-bdc9-ca0db04f8712" (UID: "6386555e-93c8-46af-bdc9-ca0db04f8712"). InnerVolumeSpecName "kube-api-access-865fc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.088730 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f411351c-a796-4df4-9e09-407e93afb4a9-kube-api-access-lvnv4" (OuterVolumeSpecName: "kube-api-access-lvnv4") pod "f411351c-a796-4df4-9e09-407e93afb4a9" (UID: "f411351c-a796-4df4-9e09-407e93afb4a9"). InnerVolumeSpecName "kube-api-access-lvnv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.109786 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f411351c-a796-4df4-9e09-407e93afb4a9" (UID: "f411351c-a796-4df4-9e09-407e93afb4a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.111361 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6386555e-93c8-46af-bdc9-ca0db04f8712" (UID: "6386555e-93c8-46af-bdc9-ca0db04f8712"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.120016 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-76558d5849-k75gx" event={"ID":"6386555e-93c8-46af-bdc9-ca0db04f8712","Type":"ContainerDied","Data":"ae4733a7733525ee17960b7759b8897ada40f8b766a7ee8fff1e5c6b51c0a284"} Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.120064 4728 scope.go:117] "RemoveContainer" containerID="6f020602c3a8c0f26c04aafcd0a976ad62bbbc9693061a0b5c11b4fb5b6081ac" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.120177 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-76558d5849-k75gx" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.129341 4728 generic.go:334] "Generic (PLEG): container finished" podID="f411351c-a796-4df4-9e09-407e93afb4a9" containerID="2ff6ea4b5f4988efbb09b919d8070765b7955305081f3fda1358c992bd6cff30" exitCode=0 Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.129402 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" event={"ID":"f411351c-a796-4df4-9e09-407e93afb4a9","Type":"ContainerDied","Data":"2ff6ea4b5f4988efbb09b919d8070765b7955305081f3fda1358c992bd6cff30"} Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.129426 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" event={"ID":"f411351c-a796-4df4-9e09-407e93afb4a9","Type":"ContainerDied","Data":"677782ef9a7f0f243bb72121b2c438f9e9dab82dca6213829dad6eb9e97c85be"} Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.129494 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-69498cf5f9-8b2rb" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.136609 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.136652 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.136663 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.136675 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-865fc\" (UniqueName: \"kubernetes.io/projected/6386555e-93c8-46af-bdc9-ca0db04f8712-kube-api-access-865fc\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.136686 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.136701 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvnv4\" (UniqueName: \"kubernetes.io/projected/f411351c-a796-4df4-9e09-407e93afb4a9-kube-api-access-lvnv4\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.159002 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod 
"f411351c-a796-4df4-9e09-407e93afb4a9" (UID: "f411351c-a796-4df4-9e09-407e93afb4a9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.164682 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-config-data" (OuterVolumeSpecName: "config-data") pod "f411351c-a796-4df4-9e09-407e93afb4a9" (UID: "f411351c-a796-4df4-9e09-407e93afb4a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.197822 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6386555e-93c8-46af-bdc9-ca0db04f8712" (UID: "6386555e-93c8-46af-bdc9-ca0db04f8712"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.209765 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f411351c-a796-4df4-9e09-407e93afb4a9" (UID: "f411351c-a796-4df4-9e09-407e93afb4a9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.217182 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6386555e-93c8-46af-bdc9-ca0db04f8712" (UID: "6386555e-93c8-46af-bdc9-ca0db04f8712"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.234669 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-config-data" (OuterVolumeSpecName: "config-data") pod "6386555e-93c8-46af-bdc9-ca0db04f8712" (UID: "6386555e-93c8-46af-bdc9-ca0db04f8712"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.239287 4728 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.239322 4728 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.239332 4728 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.239341 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f411351c-a796-4df4-9e09-407e93afb4a9-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.239352 4728 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.239361 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6386555e-93c8-46af-bdc9-ca0db04f8712-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.281992 4728 scope.go:117] "RemoveContainer" containerID="2ff6ea4b5f4988efbb09b919d8070765b7955305081f3fda1358c992bd6cff30" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.310443 4728 scope.go:117] "RemoveContainer" containerID="2ff6ea4b5f4988efbb09b919d8070765b7955305081f3fda1358c992bd6cff30" Feb 27 10:53:56 crc kubenswrapper[4728]: E0227 10:53:56.310934 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff6ea4b5f4988efbb09b919d8070765b7955305081f3fda1358c992bd6cff30\": container with ID starting with 2ff6ea4b5f4988efbb09b919d8070765b7955305081f3fda1358c992bd6cff30 not found: ID does not exist" containerID="2ff6ea4b5f4988efbb09b919d8070765b7955305081f3fda1358c992bd6cff30" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.310965 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff6ea4b5f4988efbb09b919d8070765b7955305081f3fda1358c992bd6cff30"} err="failed to get container status \"2ff6ea4b5f4988efbb09b919d8070765b7955305081f3fda1358c992bd6cff30\": rpc error: code = NotFound desc = could not find container \"2ff6ea4b5f4988efbb09b919d8070765b7955305081f3fda1358c992bd6cff30\": container with ID starting with 2ff6ea4b5f4988efbb09b919d8070765b7955305081f3fda1358c992bd6cff30 not found: ID does not exist" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.463080 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-76558d5849-k75gx"] Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.484764 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-76558d5849-k75gx"] Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.499317 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-cfnapi-69498cf5f9-8b2rb"] Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.515984 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-69498cf5f9-8b2rb"] Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.742580 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6386555e-93c8-46af-bdc9-ca0db04f8712" path="/var/lib/kubelet/pods/6386555e-93c8-46af-bdc9-ca0db04f8712/volumes" Feb 27 10:53:56 crc kubenswrapper[4728]: I0227 10:53:56.743619 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f411351c-a796-4df4-9e09-407e93afb4a9" path="/var/lib/kubelet/pods/f411351c-a796-4df4-9e09-407e93afb4a9/volumes" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.018615 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p"] Feb 27 10:53:58 crc kubenswrapper[4728]: E0227 10:53:58.019885 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f411351c-a796-4df4-9e09-407e93afb4a9" containerName="heat-cfnapi" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.019909 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f411351c-a796-4df4-9e09-407e93afb4a9" containerName="heat-cfnapi" Feb 27 10:53:58 crc kubenswrapper[4728]: E0227 10:53:58.019970 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6386555e-93c8-46af-bdc9-ca0db04f8712" containerName="heat-api" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.019981 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6386555e-93c8-46af-bdc9-ca0db04f8712" containerName="heat-api" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.020465 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f411351c-a796-4df4-9e09-407e93afb4a9" containerName="heat-cfnapi" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.020595 4728 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6386555e-93c8-46af-bdc9-ca0db04f8712" containerName="heat-api" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.022386 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.034091 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.034552 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.034669 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.034714 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.042080 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p"] Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.186128 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b022b91-04fe-443e-af6c-d47673e6f22f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p\" (UID: \"7b022b91-04fe-443e-af6c-d47673e6f22f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.186189 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjw75\" (UniqueName: \"kubernetes.io/projected/7b022b91-04fe-443e-af6c-d47673e6f22f-kube-api-access-xjw75\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p\" (UID: 
\"7b022b91-04fe-443e-af6c-d47673e6f22f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.186292 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b022b91-04fe-443e-af6c-d47673e6f22f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p\" (UID: \"7b022b91-04fe-443e-af6c-d47673e6f22f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.186434 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b022b91-04fe-443e-af6c-d47673e6f22f-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p\" (UID: \"7b022b91-04fe-443e-af6c-d47673e6f22f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.289140 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b022b91-04fe-443e-af6c-d47673e6f22f-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p\" (UID: \"7b022b91-04fe-443e-af6c-d47673e6f22f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.289309 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b022b91-04fe-443e-af6c-d47673e6f22f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p\" (UID: \"7b022b91-04fe-443e-af6c-d47673e6f22f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" Feb 27 10:53:58 crc kubenswrapper[4728]: 
I0227 10:53:58.289351 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjw75\" (UniqueName: \"kubernetes.io/projected/7b022b91-04fe-443e-af6c-d47673e6f22f-kube-api-access-xjw75\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p\" (UID: \"7b022b91-04fe-443e-af6c-d47673e6f22f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.289425 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b022b91-04fe-443e-af6c-d47673e6f22f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p\" (UID: \"7b022b91-04fe-443e-af6c-d47673e6f22f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.299774 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b022b91-04fe-443e-af6c-d47673e6f22f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p\" (UID: \"7b022b91-04fe-443e-af6c-d47673e6f22f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.300396 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b022b91-04fe-443e-af6c-d47673e6f22f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p\" (UID: \"7b022b91-04fe-443e-af6c-d47673e6f22f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.308677 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/7b022b91-04fe-443e-af6c-d47673e6f22f-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p\" (UID: \"7b022b91-04fe-443e-af6c-d47673e6f22f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.312169 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjw75\" (UniqueName: \"kubernetes.io/projected/7b022b91-04fe-443e-af6c-d47673e6f22f-kube-api-access-xjw75\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p\" (UID: \"7b022b91-04fe-443e-af6c-d47673e6f22f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" Feb 27 10:53:58 crc kubenswrapper[4728]: I0227 10:53:58.371927 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" Feb 27 10:53:59 crc kubenswrapper[4728]: W0227 10:53:59.517557 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b022b91_04fe_443e_af6c_d47673e6f22f.slice/crio-98b79793976192690cf3e5d794cb3b77fb0b0d25c37c9e9cfa11e61040157b99 WatchSource:0}: Error finding container 98b79793976192690cf3e5d794cb3b77fb0b0d25c37c9e9cfa11e61040157b99: Status 404 returned error can't find the container with id 98b79793976192690cf3e5d794cb3b77fb0b0d25c37c9e9cfa11e61040157b99 Feb 27 10:53:59 crc kubenswrapper[4728]: I0227 10:53:59.522392 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p"] Feb 27 10:53:59 crc kubenswrapper[4728]: I0227 10:53:59.571672 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-76cf949bdc-hqx69" Feb 27 10:53:59 crc kubenswrapper[4728]: I0227 10:53:59.625334 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7777f965d-45648"] Feb 
27 10:53:59 crc kubenswrapper[4728]: I0227 10:53:59.625791 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7777f965d-45648" podUID="cfcacc51-e5e0-4d12-a6d1-b810a9e93e42" containerName="heat-engine" containerID="cri-o://e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35" gracePeriod=60 Feb 27 10:54:00 crc kubenswrapper[4728]: I0227 10:54:00.140104 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536494-97ggp"] Feb 27 10:54:00 crc kubenswrapper[4728]: I0227 10:54:00.141933 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536494-97ggp" Feb 27 10:54:00 crc kubenswrapper[4728]: I0227 10:54:00.143897 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 10:54:00 crc kubenswrapper[4728]: I0227 10:54:00.144916 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:54:00 crc kubenswrapper[4728]: I0227 10:54:00.154813 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536494-97ggp"] Feb 27 10:54:00 crc kubenswrapper[4728]: I0227 10:54:00.161555 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:54:00 crc kubenswrapper[4728]: I0227 10:54:00.197391 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" event={"ID":"7b022b91-04fe-443e-af6c-d47673e6f22f","Type":"ContainerStarted","Data":"98b79793976192690cf3e5d794cb3b77fb0b0d25c37c9e9cfa11e61040157b99"} Feb 27 10:54:00 crc kubenswrapper[4728]: I0227 10:54:00.251356 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlvnn\" (UniqueName: 
\"kubernetes.io/projected/829ffeda-5f1b-4cfa-8417-71c47d4e621f-kube-api-access-qlvnn\") pod \"auto-csr-approver-29536494-97ggp\" (UID: \"829ffeda-5f1b-4cfa-8417-71c47d4e621f\") " pod="openshift-infra/auto-csr-approver-29536494-97ggp" Feb 27 10:54:00 crc kubenswrapper[4728]: I0227 10:54:00.353964 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlvnn\" (UniqueName: \"kubernetes.io/projected/829ffeda-5f1b-4cfa-8417-71c47d4e621f-kube-api-access-qlvnn\") pod \"auto-csr-approver-29536494-97ggp\" (UID: \"829ffeda-5f1b-4cfa-8417-71c47d4e621f\") " pod="openshift-infra/auto-csr-approver-29536494-97ggp" Feb 27 10:54:00 crc kubenswrapper[4728]: I0227 10:54:00.375500 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlvnn\" (UniqueName: \"kubernetes.io/projected/829ffeda-5f1b-4cfa-8417-71c47d4e621f-kube-api-access-qlvnn\") pod \"auto-csr-approver-29536494-97ggp\" (UID: \"829ffeda-5f1b-4cfa-8417-71c47d4e621f\") " pod="openshift-infra/auto-csr-approver-29536494-97ggp" Feb 27 10:54:00 crc kubenswrapper[4728]: I0227 10:54:00.461980 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536494-97ggp" Feb 27 10:54:01 crc kubenswrapper[4728]: I0227 10:54:01.030215 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536494-97ggp"] Feb 27 10:54:01 crc kubenswrapper[4728]: I0227 10:54:01.214628 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536494-97ggp" event={"ID":"829ffeda-5f1b-4cfa-8417-71c47d4e621f","Type":"ContainerStarted","Data":"b3973d72b5188cb5923466c27606175aa42a0bde61456aa608e956d4514282e0"} Feb 27 10:54:03 crc kubenswrapper[4728]: E0227 10:54:03.218341 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 27 10:54:03 crc kubenswrapper[4728]: E0227 10:54:03.222700 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 27 10:54:03 crc kubenswrapper[4728]: E0227 10:54:03.224918 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 27 10:54:03 crc kubenswrapper[4728]: E0227 10:54:03.224993 4728 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
probeType="Readiness" pod="openstack/heat-engine-7777f965d-45648" podUID="cfcacc51-e5e0-4d12-a6d1-b810a9e93e42" containerName="heat-engine" Feb 27 10:54:03 crc kubenswrapper[4728]: I0227 10:54:03.280226 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536494-97ggp" event={"ID":"829ffeda-5f1b-4cfa-8417-71c47d4e621f","Type":"ContainerStarted","Data":"9047859afd057f8c632d00166deebd1664cf8f80449e4893837ed2245fd4deae"} Feb 27 10:54:03 crc kubenswrapper[4728]: I0227 10:54:03.310069 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536494-97ggp" podStartSLOduration=2.264732 podStartE2EDuration="3.310048624s" podCreationTimestamp="2026-02-27 10:54:00 +0000 UTC" firstStartedPulling="2026-02-27 10:54:01.044386315 +0000 UTC m=+1661.006752421" lastFinishedPulling="2026-02-27 10:54:02.089702939 +0000 UTC m=+1662.052069045" observedRunningTime="2026-02-27 10:54:03.296590488 +0000 UTC m=+1663.258956594" watchObservedRunningTime="2026-02-27 10:54:03.310048624 +0000 UTC m=+1663.272414730" Feb 27 10:54:04 crc kubenswrapper[4728]: I0227 10:54:04.300150 4728 generic.go:334] "Generic (PLEG): container finished" podID="829ffeda-5f1b-4cfa-8417-71c47d4e621f" containerID="9047859afd057f8c632d00166deebd1664cf8f80449e4893837ed2245fd4deae" exitCode=0 Feb 27 10:54:04 crc kubenswrapper[4728]: I0227 10:54:04.300542 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536494-97ggp" event={"ID":"829ffeda-5f1b-4cfa-8417-71c47d4e621f","Type":"ContainerDied","Data":"9047859afd057f8c632d00166deebd1664cf8f80449e4893837ed2245fd4deae"} Feb 27 10:54:06 crc kubenswrapper[4728]: I0227 10:54:06.940659 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.056435 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] 
Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.215392 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-txsbm"] Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.234726 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-txsbm"] Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.381610 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-t5pxb"] Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.383340 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-t5pxb" Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.386854 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.402967 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-t5pxb"] Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.461131 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-combined-ca-bundle\") pod \"aodh-db-sync-t5pxb\" (UID: \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\") " pod="openstack/aodh-db-sync-t5pxb" Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.461194 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-config-data\") pod \"aodh-db-sync-t5pxb\" (UID: \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\") " pod="openstack/aodh-db-sync-t5pxb" Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.461349 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-scripts\") pod \"aodh-db-sync-t5pxb\" 
(UID: \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\") " pod="openstack/aodh-db-sync-t5pxb" Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.461375 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv7hc\" (UniqueName: \"kubernetes.io/projected/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-kube-api-access-xv7hc\") pod \"aodh-db-sync-t5pxb\" (UID: \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\") " pod="openstack/aodh-db-sync-t5pxb" Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.563667 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-scripts\") pod \"aodh-db-sync-t5pxb\" (UID: \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\") " pod="openstack/aodh-db-sync-t5pxb" Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.563718 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv7hc\" (UniqueName: \"kubernetes.io/projected/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-kube-api-access-xv7hc\") pod \"aodh-db-sync-t5pxb\" (UID: \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\") " pod="openstack/aodh-db-sync-t5pxb" Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.563831 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-combined-ca-bundle\") pod \"aodh-db-sync-t5pxb\" (UID: \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\") " pod="openstack/aodh-db-sync-t5pxb" Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.563875 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-config-data\") pod \"aodh-db-sync-t5pxb\" (UID: \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\") " pod="openstack/aodh-db-sync-t5pxb" Feb 27 10:54:07 crc kubenswrapper[4728]: 
I0227 10:54:07.570140 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-combined-ca-bundle\") pod \"aodh-db-sync-t5pxb\" (UID: \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\") " pod="openstack/aodh-db-sync-t5pxb" Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.570468 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-scripts\") pod \"aodh-db-sync-t5pxb\" (UID: \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\") " pod="openstack/aodh-db-sync-t5pxb" Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.571756 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-config-data\") pod \"aodh-db-sync-t5pxb\" (UID: \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\") " pod="openstack/aodh-db-sync-t5pxb" Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.580402 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv7hc\" (UniqueName: \"kubernetes.io/projected/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-kube-api-access-xv7hc\") pod \"aodh-db-sync-t5pxb\" (UID: \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\") " pod="openstack/aodh-db-sync-t5pxb" Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.636769 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 27 10:54:07 crc kubenswrapper[4728]: I0227 10:54:07.699471 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-t5pxb" Feb 27 10:54:08 crc kubenswrapper[4728]: I0227 10:54:08.739899 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="138d00ee-f707-4482-8966-5a7f182ae3bf" path="/var/lib/kubelet/pods/138d00ee-f707-4482-8966-5a7f182ae3bf/volumes" Feb 27 10:54:13 crc kubenswrapper[4728]: I0227 10:54:13.050557 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536494-97ggp" Feb 27 10:54:13 crc kubenswrapper[4728]: I0227 10:54:13.217055 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlvnn\" (UniqueName: \"kubernetes.io/projected/829ffeda-5f1b-4cfa-8417-71c47d4e621f-kube-api-access-qlvnn\") pod \"829ffeda-5f1b-4cfa-8417-71c47d4e621f\" (UID: \"829ffeda-5f1b-4cfa-8417-71c47d4e621f\") " Feb 27 10:54:13 crc kubenswrapper[4728]: E0227 10:54:13.218842 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 27 10:54:13 crc kubenswrapper[4728]: E0227 10:54:13.220660 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 27 10:54:13 crc kubenswrapper[4728]: E0227 10:54:13.222122 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35" 
cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 27 10:54:13 crc kubenswrapper[4728]: E0227 10:54:13.222168 4728 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7777f965d-45648" podUID="cfcacc51-e5e0-4d12-a6d1-b810a9e93e42" containerName="heat-engine" Feb 27 10:54:13 crc kubenswrapper[4728]: I0227 10:54:13.225208 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829ffeda-5f1b-4cfa-8417-71c47d4e621f-kube-api-access-qlvnn" (OuterVolumeSpecName: "kube-api-access-qlvnn") pod "829ffeda-5f1b-4cfa-8417-71c47d4e621f" (UID: "829ffeda-5f1b-4cfa-8417-71c47d4e621f"). InnerVolumeSpecName "kube-api-access-qlvnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:54:13 crc kubenswrapper[4728]: I0227 10:54:13.307218 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-t5pxb"] Feb 27 10:54:13 crc kubenswrapper[4728]: W0227 10:54:13.308638 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod132c4b8b_7345_46a9_8bfa_70bbb048d6f5.slice/crio-76c9a83b88cac52a56b1ff028226b013effcb906b8e29f28feb2ce431c60b99d WatchSource:0}: Error finding container 76c9a83b88cac52a56b1ff028226b013effcb906b8e29f28feb2ce431c60b99d: Status 404 returned error can't find the container with id 76c9a83b88cac52a56b1ff028226b013effcb906b8e29f28feb2ce431c60b99d Feb 27 10:54:13 crc kubenswrapper[4728]: I0227 10:54:13.320206 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlvnn\" (UniqueName: \"kubernetes.io/projected/829ffeda-5f1b-4cfa-8417-71c47d4e621f-kube-api-access-qlvnn\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:13 crc kubenswrapper[4728]: I0227 10:54:13.409805 4728 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/rabbitmq-server-1" podUID="d96ab6cd-ed9d-4924-9566-91930411701d" containerName="rabbitmq" containerID="cri-o://58a583aacc893b48f9a492c1d4b391f04e81fe2b5efd19706fceab87c90e9031" gracePeriod=604794 Feb 27 10:54:13 crc kubenswrapper[4728]: I0227 10:54:13.419346 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-t5pxb" event={"ID":"132c4b8b-7345-46a9-8bfa-70bbb048d6f5","Type":"ContainerStarted","Data":"76c9a83b88cac52a56b1ff028226b013effcb906b8e29f28feb2ce431c60b99d"} Feb 27 10:54:13 crc kubenswrapper[4728]: I0227 10:54:13.421139 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536494-97ggp" event={"ID":"829ffeda-5f1b-4cfa-8417-71c47d4e621f","Type":"ContainerDied","Data":"b3973d72b5188cb5923466c27606175aa42a0bde61456aa608e956d4514282e0"} Feb 27 10:54:13 crc kubenswrapper[4728]: I0227 10:54:13.421171 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3973d72b5188cb5923466c27606175aa42a0bde61456aa608e956d4514282e0" Feb 27 10:54:13 crc kubenswrapper[4728]: I0227 10:54:13.421247 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536494-97ggp" Feb 27 10:54:13 crc kubenswrapper[4728]: I0227 10:54:13.423472 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" event={"ID":"7b022b91-04fe-443e-af6c-d47673e6f22f","Type":"ContainerStarted","Data":"3cbe49045ed6143dbc2511837d9383d4c2996657e133a463b1770a7cea4ac54c"} Feb 27 10:54:13 crc kubenswrapper[4728]: I0227 10:54:13.454210 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" podStartSLOduration=3.134381592 podStartE2EDuration="16.454189447s" podCreationTimestamp="2026-02-27 10:53:57 +0000 UTC" firstStartedPulling="2026-02-27 10:53:59.520173635 +0000 UTC m=+1659.482539761" lastFinishedPulling="2026-02-27 10:54:12.83998151 +0000 UTC m=+1672.802347616" observedRunningTime="2026-02-27 10:54:13.441754199 +0000 UTC m=+1673.404120315" watchObservedRunningTime="2026-02-27 10:54:13.454189447 +0000 UTC m=+1673.416555563" Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.109671 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7777f965d-45648" Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.132016 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536488-bw8pl"] Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.159213 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536488-bw8pl"] Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.244301 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsbdf\" (UniqueName: \"kubernetes.io/projected/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-kube-api-access-zsbdf\") pod \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\" (UID: \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\") " Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.244428 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-config-data\") pod \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\" (UID: \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\") " Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.244473 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-combined-ca-bundle\") pod \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\" (UID: \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\") " Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.244630 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-config-data-custom\") pod \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\" (UID: \"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42\") " Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.249928 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cfcacc51-e5e0-4d12-a6d1-b810a9e93e42" (UID: "cfcacc51-e5e0-4d12-a6d1-b810a9e93e42"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.267061 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-kube-api-access-zsbdf" (OuterVolumeSpecName: "kube-api-access-zsbdf") pod "cfcacc51-e5e0-4d12-a6d1-b810a9e93e42" (UID: "cfcacc51-e5e0-4d12-a6d1-b810a9e93e42"). InnerVolumeSpecName "kube-api-access-zsbdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.277614 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfcacc51-e5e0-4d12-a6d1-b810a9e93e42" (UID: "cfcacc51-e5e0-4d12-a6d1-b810a9e93e42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.310032 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-config-data" (OuterVolumeSpecName: "config-data") pod "cfcacc51-e5e0-4d12-a6d1-b810a9e93e42" (UID: "cfcacc51-e5e0-4d12-a6d1-b810a9e93e42"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.348393 4728 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.348435 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsbdf\" (UniqueName: \"kubernetes.io/projected/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-kube-api-access-zsbdf\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.348472 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.348481 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.438638 4728 generic.go:334] "Generic (PLEG): container finished" podID="cfcacc51-e5e0-4d12-a6d1-b810a9e93e42" containerID="e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35" exitCode=0 Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.438695 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7777f965d-45648" Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.438760 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7777f965d-45648" event={"ID":"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42","Type":"ContainerDied","Data":"e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35"} Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.438837 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7777f965d-45648" event={"ID":"cfcacc51-e5e0-4d12-a6d1-b810a9e93e42","Type":"ContainerDied","Data":"0f1b7aa56710023f2e4d93d945797d4f549ccd265ecdbc7de982dfab662bc1ff"} Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.438899 4728 scope.go:117] "RemoveContainer" containerID="e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35" Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.486670 4728 scope.go:117] "RemoveContainer" containerID="e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35" Feb 27 10:54:14 crc kubenswrapper[4728]: E0227 10:54:14.487118 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35\": container with ID starting with e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35 not found: ID does not exist" containerID="e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35" Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.487151 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35"} err="failed to get container status \"e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35\": rpc error: code = NotFound desc = could not find container \"e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35\": container with ID 
starting with e710d1a685c340e2ca0f326e59a5ed2d6cbd5f8095465f79d665ac6d5dbb6b35 not found: ID does not exist" Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.492222 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7777f965d-45648"] Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.505636 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7777f965d-45648"] Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.739905 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfcacc51-e5e0-4d12-a6d1-b810a9e93e42" path="/var/lib/kubelet/pods/cfcacc51-e5e0-4d12-a6d1-b810a9e93e42/volumes" Feb 27 10:54:14 crc kubenswrapper[4728]: I0227 10:54:14.740886 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1047017-329b-497b-b8a0-235fd7b5681f" path="/var/lib/kubelet/pods/f1047017-329b-497b-b8a0-235fd7b5681f/volumes" Feb 27 10:54:18 crc kubenswrapper[4728]: I0227 10:54:18.088254 4728 scope.go:117] "RemoveContainer" containerID="9673b4727a86843edf95617bbaae3ebd2e504d020d4c212960c6cba20335ef3a" Feb 27 10:54:18 crc kubenswrapper[4728]: I0227 10:54:18.979905 4728 scope.go:117] "RemoveContainer" containerID="11a744c851f102a1c67cb296ad43cb73ec26b28fbfbbdb0cb2daffeef02f844a" Feb 27 10:54:19 crc kubenswrapper[4728]: I0227 10:54:19.056769 4728 scope.go:117] "RemoveContainer" containerID="44f3186867c78c9f2e5005207e64bf55d17b81abd564ea2079614a63c852129d" Feb 27 10:54:19 crc kubenswrapper[4728]: I0227 10:54:19.272378 4728 scope.go:117] "RemoveContainer" containerID="3fbb2369f8eb8d4dc2f2cefe59f193afaff562c8f0f8aaee9d7219477ee7249e" Feb 27 10:54:19 crc kubenswrapper[4728]: I0227 10:54:19.539655 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-t5pxb" event={"ID":"132c4b8b-7345-46a9-8bfa-70bbb048d6f5","Type":"ContainerStarted","Data":"4003518932fd59a007cd1f943804b2d07f1ed587ed621acdd720ba4840735400"} Feb 27 10:54:19 crc kubenswrapper[4728]: 
I0227 10:54:19.566954 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-t5pxb" podStartSLOduration=6.803008421 podStartE2EDuration="12.566930665s" podCreationTimestamp="2026-02-27 10:54:07 +0000 UTC" firstStartedPulling="2026-02-27 10:54:13.310689712 +0000 UTC m=+1673.273055818" lastFinishedPulling="2026-02-27 10:54:19.074611936 +0000 UTC m=+1679.036978062" observedRunningTime="2026-02-27 10:54:19.551233939 +0000 UTC m=+1679.513600065" watchObservedRunningTime="2026-02-27 10:54:19.566930665 +0000 UTC m=+1679.529296771" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.149282 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.319094 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d96ab6cd-ed9d-4924-9566-91930411701d-server-conf\") pod \"d96ab6cd-ed9d-4924-9566-91930411701d\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.319180 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d96ab6cd-ed9d-4924-9566-91930411701d-pod-info\") pod \"d96ab6cd-ed9d-4924-9566-91930411701d\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.319215 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-erlang-cookie\") pod \"d96ab6cd-ed9d-4924-9566-91930411701d\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.319272 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-tls\") pod \"d96ab6cd-ed9d-4924-9566-91930411701d\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.319312 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cgps\" (UniqueName: \"kubernetes.io/projected/d96ab6cd-ed9d-4924-9566-91930411701d-kube-api-access-2cgps\") pod \"d96ab6cd-ed9d-4924-9566-91930411701d\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.319335 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d96ab6cd-ed9d-4924-9566-91930411701d-plugins-conf\") pod \"d96ab6cd-ed9d-4924-9566-91930411701d\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.319367 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d96ab6cd-ed9d-4924-9566-91930411701d-config-data\") pod \"d96ab6cd-ed9d-4924-9566-91930411701d\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.319406 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-confd\") pod \"d96ab6cd-ed9d-4924-9566-91930411701d\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.319459 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-plugins\") pod \"d96ab6cd-ed9d-4924-9566-91930411701d\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 
10:54:20.320112 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8de2248-9600-4d63-9c2b-b5303351b265\") pod \"d96ab6cd-ed9d-4924-9566-91930411701d\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.320147 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d96ab6cd-ed9d-4924-9566-91930411701d-erlang-cookie-secret\") pod \"d96ab6cd-ed9d-4924-9566-91930411701d\" (UID: \"d96ab6cd-ed9d-4924-9566-91930411701d\") " Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.320153 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d96ab6cd-ed9d-4924-9566-91930411701d" (UID: "d96ab6cd-ed9d-4924-9566-91930411701d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.320190 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d96ab6cd-ed9d-4924-9566-91930411701d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d96ab6cd-ed9d-4924-9566-91930411701d" (UID: "d96ab6cd-ed9d-4924-9566-91930411701d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.320573 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d96ab6cd-ed9d-4924-9566-91930411701d" (UID: "d96ab6cd-ed9d-4924-9566-91930411701d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.320883 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.320900 4728 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d96ab6cd-ed9d-4924-9566-91930411701d-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.320909 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.326101 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d96ab6cd-ed9d-4924-9566-91930411701d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d96ab6cd-ed9d-4924-9566-91930411701d" (UID: "d96ab6cd-ed9d-4924-9566-91930411701d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.326706 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d96ab6cd-ed9d-4924-9566-91930411701d-pod-info" (OuterVolumeSpecName: "pod-info") pod "d96ab6cd-ed9d-4924-9566-91930411701d" (UID: "d96ab6cd-ed9d-4924-9566-91930411701d"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.328178 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d96ab6cd-ed9d-4924-9566-91930411701d" (UID: "d96ab6cd-ed9d-4924-9566-91930411701d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.328628 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d96ab6cd-ed9d-4924-9566-91930411701d-kube-api-access-2cgps" (OuterVolumeSpecName: "kube-api-access-2cgps") pod "d96ab6cd-ed9d-4924-9566-91930411701d" (UID: "d96ab6cd-ed9d-4924-9566-91930411701d"). InnerVolumeSpecName "kube-api-access-2cgps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.366465 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8de2248-9600-4d63-9c2b-b5303351b265" (OuterVolumeSpecName: "persistence") pod "d96ab6cd-ed9d-4924-9566-91930411701d" (UID: "d96ab6cd-ed9d-4924-9566-91930411701d"). InnerVolumeSpecName "pvc-b8de2248-9600-4d63-9c2b-b5303351b265". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.381137 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d96ab6cd-ed9d-4924-9566-91930411701d-config-data" (OuterVolumeSpecName: "config-data") pod "d96ab6cd-ed9d-4924-9566-91930411701d" (UID: "d96ab6cd-ed9d-4924-9566-91930411701d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.393210 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d96ab6cd-ed9d-4924-9566-91930411701d-server-conf" (OuterVolumeSpecName: "server-conf") pod "d96ab6cd-ed9d-4924-9566-91930411701d" (UID: "d96ab6cd-ed9d-4924-9566-91930411701d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.427627 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.427667 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cgps\" (UniqueName: \"kubernetes.io/projected/d96ab6cd-ed9d-4924-9566-91930411701d-kube-api-access-2cgps\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.427681 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d96ab6cd-ed9d-4924-9566-91930411701d-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.427721 4728 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b8de2248-9600-4d63-9c2b-b5303351b265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8de2248-9600-4d63-9c2b-b5303351b265\") on node \"crc\" " Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.427737 4728 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d96ab6cd-ed9d-4924-9566-91930411701d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.427747 4728 reconciler_common.go:293] "Volume 
detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d96ab6cd-ed9d-4924-9566-91930411701d-server-conf\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.427760 4728 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d96ab6cd-ed9d-4924-9566-91930411701d-pod-info\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.506813 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d96ab6cd-ed9d-4924-9566-91930411701d" (UID: "d96ab6cd-ed9d-4924-9566-91930411701d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.524662 4728 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.524848 4728 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b8de2248-9600-4d63-9c2b-b5303351b265" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8de2248-9600-4d63-9c2b-b5303351b265") on node "crc" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.529943 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d96ab6cd-ed9d-4924-9566-91930411701d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.529991 4728 reconciler_common.go:293] "Volume detached for volume \"pvc-b8de2248-9600-4d63-9c2b-b5303351b265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8de2248-9600-4d63-9c2b-b5303351b265\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.556653 4728 generic.go:334] "Generic (PLEG): container finished" podID="d96ab6cd-ed9d-4924-9566-91930411701d" containerID="58a583aacc893b48f9a492c1d4b391f04e81fe2b5efd19706fceab87c90e9031" exitCode=0 Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.556695 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"d96ab6cd-ed9d-4924-9566-91930411701d","Type":"ContainerDied","Data":"58a583aacc893b48f9a492c1d4b391f04e81fe2b5efd19706fceab87c90e9031"} Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.556751 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.556770 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"d96ab6cd-ed9d-4924-9566-91930411701d","Type":"ContainerDied","Data":"2d51f58ce483d814534088b6d04d9e8dce3267e8a571edb8bc4a89e066226ddd"} Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.556794 4728 scope.go:117] "RemoveContainer" containerID="58a583aacc893b48f9a492c1d4b391f04e81fe2b5efd19706fceab87c90e9031" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.624804 4728 scope.go:117] "RemoveContainer" containerID="e4997d36ad5328d03f64dcc85aa7e6861c52e45b14220b75791b03e527d710b7" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.634072 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.655663 4728 scope.go:117] "RemoveContainer" containerID="58a583aacc893b48f9a492c1d4b391f04e81fe2b5efd19706fceab87c90e9031" Feb 27 10:54:20 crc kubenswrapper[4728]: E0227 10:54:20.656589 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58a583aacc893b48f9a492c1d4b391f04e81fe2b5efd19706fceab87c90e9031\": container with ID starting with 58a583aacc893b48f9a492c1d4b391f04e81fe2b5efd19706fceab87c90e9031 not found: ID does not exist" containerID="58a583aacc893b48f9a492c1d4b391f04e81fe2b5efd19706fceab87c90e9031" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.656624 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58a583aacc893b48f9a492c1d4b391f04e81fe2b5efd19706fceab87c90e9031"} err="failed to get container status \"58a583aacc893b48f9a492c1d4b391f04e81fe2b5efd19706fceab87c90e9031\": rpc error: code = NotFound desc = could not find container \"58a583aacc893b48f9a492c1d4b391f04e81fe2b5efd19706fceab87c90e9031\": container with ID 
starting with 58a583aacc893b48f9a492c1d4b391f04e81fe2b5efd19706fceab87c90e9031 not found: ID does not exist" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.656645 4728 scope.go:117] "RemoveContainer" containerID="e4997d36ad5328d03f64dcc85aa7e6861c52e45b14220b75791b03e527d710b7" Feb 27 10:54:20 crc kubenswrapper[4728]: E0227 10:54:20.657194 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4997d36ad5328d03f64dcc85aa7e6861c52e45b14220b75791b03e527d710b7\": container with ID starting with e4997d36ad5328d03f64dcc85aa7e6861c52e45b14220b75791b03e527d710b7 not found: ID does not exist" containerID="e4997d36ad5328d03f64dcc85aa7e6861c52e45b14220b75791b03e527d710b7" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.657243 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4997d36ad5328d03f64dcc85aa7e6861c52e45b14220b75791b03e527d710b7"} err="failed to get container status \"e4997d36ad5328d03f64dcc85aa7e6861c52e45b14220b75791b03e527d710b7\": rpc error: code = NotFound desc = could not find container \"e4997d36ad5328d03f64dcc85aa7e6861c52e45b14220b75791b03e527d710b7\": container with ID starting with e4997d36ad5328d03f64dcc85aa7e6861c52e45b14220b75791b03e527d710b7 not found: ID does not exist" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.663906 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.679432 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 27 10:54:20 crc kubenswrapper[4728]: E0227 10:54:20.680124 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96ab6cd-ed9d-4924-9566-91930411701d" containerName="rabbitmq" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.680149 4728 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d96ab6cd-ed9d-4924-9566-91930411701d" containerName="rabbitmq" Feb 27 10:54:20 crc kubenswrapper[4728]: E0227 10:54:20.680162 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829ffeda-5f1b-4cfa-8417-71c47d4e621f" containerName="oc" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.680170 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="829ffeda-5f1b-4cfa-8417-71c47d4e621f" containerName="oc" Feb 27 10:54:20 crc kubenswrapper[4728]: E0227 10:54:20.680199 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfcacc51-e5e0-4d12-a6d1-b810a9e93e42" containerName="heat-engine" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.680208 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfcacc51-e5e0-4d12-a6d1-b810a9e93e42" containerName="heat-engine" Feb 27 10:54:20 crc kubenswrapper[4728]: E0227 10:54:20.680239 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96ab6cd-ed9d-4924-9566-91930411701d" containerName="setup-container" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.680249 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96ab6cd-ed9d-4924-9566-91930411701d" containerName="setup-container" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.680547 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="d96ab6cd-ed9d-4924-9566-91930411701d" containerName="rabbitmq" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.680573 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="829ffeda-5f1b-4cfa-8417-71c47d4e621f" containerName="oc" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.680612 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfcacc51-e5e0-4d12-a6d1-b810a9e93e42" containerName="heat-engine" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.682264 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.701860 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.742963 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d96ab6cd-ed9d-4924-9566-91930411701d" path="/var/lib/kubelet/pods/d96ab6cd-ed9d-4924-9566-91930411701d/volumes" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.836910 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2fdabfa1-9c8c-4434-8a44-30d40a18a023-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.837076 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2fdabfa1-9c8c-4434-8a44-30d40a18a023-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.837107 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2fdabfa1-9c8c-4434-8a44-30d40a18a023-server-conf\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.837130 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2fdabfa1-9c8c-4434-8a44-30d40a18a023-config-data\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " 
pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.837201 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2fdabfa1-9c8c-4434-8a44-30d40a18a023-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.837225 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2fdabfa1-9c8c-4434-8a44-30d40a18a023-pod-info\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.837286 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdc95\" (UniqueName: \"kubernetes.io/projected/2fdabfa1-9c8c-4434-8a44-30d40a18a023-kube-api-access-qdc95\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.837333 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2fdabfa1-9c8c-4434-8a44-30d40a18a023-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.837370 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b8de2248-9600-4d63-9c2b-b5303351b265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8de2248-9600-4d63-9c2b-b5303351b265\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " 
pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.837392 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2fdabfa1-9c8c-4434-8a44-30d40a18a023-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.837447 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2fdabfa1-9c8c-4434-8a44-30d40a18a023-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.939626 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2fdabfa1-9c8c-4434-8a44-30d40a18a023-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.939688 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2fdabfa1-9c8c-4434-8a44-30d40a18a023-server-conf\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.939718 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2fdabfa1-9c8c-4434-8a44-30d40a18a023-config-data\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.939790 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2fdabfa1-9c8c-4434-8a44-30d40a18a023-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.939818 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2fdabfa1-9c8c-4434-8a44-30d40a18a023-pod-info\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.939858 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdc95\" (UniqueName: \"kubernetes.io/projected/2fdabfa1-9c8c-4434-8a44-30d40a18a023-kube-api-access-qdc95\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.939891 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2fdabfa1-9c8c-4434-8a44-30d40a18a023-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.939934 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b8de2248-9600-4d63-9c2b-b5303351b265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8de2248-9600-4d63-9c2b-b5303351b265\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.939962 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/2fdabfa1-9c8c-4434-8a44-30d40a18a023-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.940038 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2fdabfa1-9c8c-4434-8a44-30d40a18a023-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.940107 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2fdabfa1-9c8c-4434-8a44-30d40a18a023-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.941962 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2fdabfa1-9c8c-4434-8a44-30d40a18a023-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.942086 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2fdabfa1-9c8c-4434-8a44-30d40a18a023-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.942303 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2fdabfa1-9c8c-4434-8a44-30d40a18a023-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " 
pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.942462 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2fdabfa1-9c8c-4434-8a44-30d40a18a023-config-data\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.943678 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.943720 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b8de2248-9600-4d63-9c2b-b5303351b265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8de2248-9600-4d63-9c2b-b5303351b265\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a1d6ea4b058ccfdf797f27c81301cba645230fa01a1deef34493c9fa0069be7c/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.945044 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2fdabfa1-9c8c-4434-8a44-30d40a18a023-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.945427 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2fdabfa1-9c8c-4434-8a44-30d40a18a023-pod-info\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.945525 4728 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2fdabfa1-9c8c-4434-8a44-30d40a18a023-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.950656 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2fdabfa1-9c8c-4434-8a44-30d40a18a023-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.951738 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2fdabfa1-9c8c-4434-8a44-30d40a18a023-server-conf\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:20 crc kubenswrapper[4728]: I0227 10:54:20.968679 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdc95\" (UniqueName: \"kubernetes.io/projected/2fdabfa1-9c8c-4434-8a44-30d40a18a023-kube-api-access-qdc95\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:21 crc kubenswrapper[4728]: I0227 10:54:21.030026 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b8de2248-9600-4d63-9c2b-b5303351b265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8de2248-9600-4d63-9c2b-b5303351b265\") pod \"rabbitmq-server-1\" (UID: \"2fdabfa1-9c8c-4434-8a44-30d40a18a023\") " pod="openstack/rabbitmq-server-1" Feb 27 10:54:21 crc kubenswrapper[4728]: I0227 10:54:21.037213 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 27 10:54:21 crc kubenswrapper[4728]: I0227 10:54:21.539694 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 27 10:54:21 crc kubenswrapper[4728]: W0227 10:54:21.548753 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fdabfa1_9c8c_4434_8a44_30d40a18a023.slice/crio-a882a26bd98c9cb8201f0f43c21dd7b8ad2b78a7f017b914ec96f7a0712d55c6 WatchSource:0}: Error finding container a882a26bd98c9cb8201f0f43c21dd7b8ad2b78a7f017b914ec96f7a0712d55c6: Status 404 returned error can't find the container with id a882a26bd98c9cb8201f0f43c21dd7b8ad2b78a7f017b914ec96f7a0712d55c6 Feb 27 10:54:21 crc kubenswrapper[4728]: I0227 10:54:21.575781 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2fdabfa1-9c8c-4434-8a44-30d40a18a023","Type":"ContainerStarted","Data":"a882a26bd98c9cb8201f0f43c21dd7b8ad2b78a7f017b914ec96f7a0712d55c6"} Feb 27 10:54:22 crc kubenswrapper[4728]: I0227 10:54:22.591753 4728 generic.go:334] "Generic (PLEG): container finished" podID="132c4b8b-7345-46a9-8bfa-70bbb048d6f5" containerID="4003518932fd59a007cd1f943804b2d07f1ed587ed621acdd720ba4840735400" exitCode=0 Feb 27 10:54:22 crc kubenswrapper[4728]: I0227 10:54:22.591875 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-t5pxb" event={"ID":"132c4b8b-7345-46a9-8bfa-70bbb048d6f5","Type":"ContainerDied","Data":"4003518932fd59a007cd1f943804b2d07f1ed587ed621acdd720ba4840735400"} Feb 27 10:54:24 crc kubenswrapper[4728]: I0227 10:54:24.058283 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-t5pxb" Feb 27 10:54:24 crc kubenswrapper[4728]: I0227 10:54:24.225870 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-scripts\") pod \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\" (UID: \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\") " Feb 27 10:54:24 crc kubenswrapper[4728]: I0227 10:54:24.225914 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-config-data\") pod \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\" (UID: \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\") " Feb 27 10:54:24 crc kubenswrapper[4728]: I0227 10:54:24.225991 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-combined-ca-bundle\") pod \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\" (UID: \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\") " Feb 27 10:54:24 crc kubenswrapper[4728]: I0227 10:54:24.226020 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv7hc\" (UniqueName: \"kubernetes.io/projected/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-kube-api-access-xv7hc\") pod \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\" (UID: \"132c4b8b-7345-46a9-8bfa-70bbb048d6f5\") " Feb 27 10:54:24 crc kubenswrapper[4728]: I0227 10:54:24.231708 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-kube-api-access-xv7hc" (OuterVolumeSpecName: "kube-api-access-xv7hc") pod "132c4b8b-7345-46a9-8bfa-70bbb048d6f5" (UID: "132c4b8b-7345-46a9-8bfa-70bbb048d6f5"). InnerVolumeSpecName "kube-api-access-xv7hc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:54:24 crc kubenswrapper[4728]: I0227 10:54:24.232384 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-scripts" (OuterVolumeSpecName: "scripts") pod "132c4b8b-7345-46a9-8bfa-70bbb048d6f5" (UID: "132c4b8b-7345-46a9-8bfa-70bbb048d6f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:54:24 crc kubenswrapper[4728]: I0227 10:54:24.257046 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "132c4b8b-7345-46a9-8bfa-70bbb048d6f5" (UID: "132c4b8b-7345-46a9-8bfa-70bbb048d6f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:54:24 crc kubenswrapper[4728]: I0227 10:54:24.268524 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-config-data" (OuterVolumeSpecName: "config-data") pod "132c4b8b-7345-46a9-8bfa-70bbb048d6f5" (UID: "132c4b8b-7345-46a9-8bfa-70bbb048d6f5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:54:24 crc kubenswrapper[4728]: I0227 10:54:24.330127 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:24 crc kubenswrapper[4728]: I0227 10:54:24.330167 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:24 crc kubenswrapper[4728]: I0227 10:54:24.330181 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv7hc\" (UniqueName: \"kubernetes.io/projected/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-kube-api-access-xv7hc\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:24 crc kubenswrapper[4728]: I0227 10:54:24.330191 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/132c4b8b-7345-46a9-8bfa-70bbb048d6f5-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:24 crc kubenswrapper[4728]: I0227 10:54:24.616301 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2fdabfa1-9c8c-4434-8a44-30d40a18a023","Type":"ContainerStarted","Data":"97908c6f004c817eccf798c8e2009d6855ffd255f254f06e8c61f3c153b09a11"} Feb 27 10:54:24 crc kubenswrapper[4728]: I0227 10:54:24.618290 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-t5pxb" event={"ID":"132c4b8b-7345-46a9-8bfa-70bbb048d6f5","Type":"ContainerDied","Data":"76c9a83b88cac52a56b1ff028226b013effcb906b8e29f28feb2ce431c60b99d"} Feb 27 10:54:24 crc kubenswrapper[4728]: I0227 10:54:24.618316 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76c9a83b88cac52a56b1ff028226b013effcb906b8e29f28feb2ce431c60b99d" Feb 27 10:54:24 crc kubenswrapper[4728]: I0227 
10:54:24.618373 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-t5pxb" Feb 27 10:54:26 crc kubenswrapper[4728]: I0227 10:54:26.666420 4728 generic.go:334] "Generic (PLEG): container finished" podID="7b022b91-04fe-443e-af6c-d47673e6f22f" containerID="3cbe49045ed6143dbc2511837d9383d4c2996657e133a463b1770a7cea4ac54c" exitCode=0 Feb 27 10:54:26 crc kubenswrapper[4728]: I0227 10:54:26.666471 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" event={"ID":"7b022b91-04fe-443e-af6c-d47673e6f22f","Type":"ContainerDied","Data":"3cbe49045ed6143dbc2511837d9383d4c2996657e133a463b1770a7cea4ac54c"} Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.243711 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.349917 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b022b91-04fe-443e-af6c-d47673e6f22f-ssh-key-openstack-edpm-ipam\") pod \"7b022b91-04fe-443e-af6c-d47673e6f22f\" (UID: \"7b022b91-04fe-443e-af6c-d47673e6f22f\") " Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.350017 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b022b91-04fe-443e-af6c-d47673e6f22f-repo-setup-combined-ca-bundle\") pod \"7b022b91-04fe-443e-af6c-d47673e6f22f\" (UID: \"7b022b91-04fe-443e-af6c-d47673e6f22f\") " Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.350112 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b022b91-04fe-443e-af6c-d47673e6f22f-inventory\") pod \"7b022b91-04fe-443e-af6c-d47673e6f22f\" (UID: 
\"7b022b91-04fe-443e-af6c-d47673e6f22f\") " Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.350257 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjw75\" (UniqueName: \"kubernetes.io/projected/7b022b91-04fe-443e-af6c-d47673e6f22f-kube-api-access-xjw75\") pod \"7b022b91-04fe-443e-af6c-d47673e6f22f\" (UID: \"7b022b91-04fe-443e-af6c-d47673e6f22f\") " Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.355287 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b022b91-04fe-443e-af6c-d47673e6f22f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "7b022b91-04fe-443e-af6c-d47673e6f22f" (UID: "7b022b91-04fe-443e-af6c-d47673e6f22f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.355598 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b022b91-04fe-443e-af6c-d47673e6f22f-kube-api-access-xjw75" (OuterVolumeSpecName: "kube-api-access-xjw75") pod "7b022b91-04fe-443e-af6c-d47673e6f22f" (UID: "7b022b91-04fe-443e-af6c-d47673e6f22f"). InnerVolumeSpecName "kube-api-access-xjw75". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.390856 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b022b91-04fe-443e-af6c-d47673e6f22f-inventory" (OuterVolumeSpecName: "inventory") pod "7b022b91-04fe-443e-af6c-d47673e6f22f" (UID: "7b022b91-04fe-443e-af6c-d47673e6f22f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.403399 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b022b91-04fe-443e-af6c-d47673e6f22f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7b022b91-04fe-443e-af6c-d47673e6f22f" (UID: "7b022b91-04fe-443e-af6c-d47673e6f22f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.454601 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b022b91-04fe-443e-af6c-d47673e6f22f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.454646 4728 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b022b91-04fe-443e-af6c-d47673e6f22f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.454669 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b022b91-04fe-443e-af6c-d47673e6f22f-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.454692 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjw75\" (UniqueName: \"kubernetes.io/projected/7b022b91-04fe-443e-af6c-d47673e6f22f-kube-api-access-xjw75\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.700138 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" event={"ID":"7b022b91-04fe-443e-af6c-d47673e6f22f","Type":"ContainerDied","Data":"98b79793976192690cf3e5d794cb3b77fb0b0d25c37c9e9cfa11e61040157b99"} Feb 27 
10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.700200 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98b79793976192690cf3e5d794cb3b77fb0b0d25c37c9e9cfa11e61040157b99" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.700286 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.796261 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.796856 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerName="aodh-api" containerID="cri-o://f1abe65966a47425466ace996490bcf4d94d3349b1740e7639e183104240803e" gracePeriod=30 Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.796975 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerName="aodh-notifier" containerID="cri-o://c050a67b2f5f62b9a818d67124386db2c589134db2850e4f241133daee7eed5b" gracePeriod=30 Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.797016 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerName="aodh-evaluator" containerID="cri-o://9752f11f948dc0098e7f9171797e12dd9bdabf580206df8f6012025a1ce7634e" gracePeriod=30 Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.797110 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerName="aodh-listener" containerID="cri-o://bfdc65d6422751035eac485694f2efb82adc4ce16dfd42a905a39c32b4823bbf" gracePeriod=30 Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.817036 4728 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll"] Feb 27 10:54:28 crc kubenswrapper[4728]: E0227 10:54:28.818815 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="132c4b8b-7345-46a9-8bfa-70bbb048d6f5" containerName="aodh-db-sync" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.819048 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="132c4b8b-7345-46a9-8bfa-70bbb048d6f5" containerName="aodh-db-sync" Feb 27 10:54:28 crc kubenswrapper[4728]: E0227 10:54:28.819087 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b022b91-04fe-443e-af6c-d47673e6f22f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.819096 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b022b91-04fe-443e-af6c-d47673e6f22f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.819436 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="132c4b8b-7345-46a9-8bfa-70bbb048d6f5" containerName="aodh-db-sync" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.819454 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b022b91-04fe-443e-af6c-d47673e6f22f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.820406 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.834278 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.834492 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.834718 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.835257 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.852222 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll"] Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.968468 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f08cf8c5-14af-42cf-be14-97e3871f6801-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-f57ll\" (UID: \"f08cf8c5-14af-42cf-be14-97e3871f6801\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.968663 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f08cf8c5-14af-42cf-be14-97e3871f6801-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-f57ll\" (UID: \"f08cf8c5-14af-42cf-be14-97e3871f6801\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll" Feb 27 10:54:28 crc kubenswrapper[4728]: I0227 10:54:28.969079 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh982\" (UniqueName: \"kubernetes.io/projected/f08cf8c5-14af-42cf-be14-97e3871f6801-kube-api-access-vh982\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-f57ll\" (UID: \"f08cf8c5-14af-42cf-be14-97e3871f6801\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll" Feb 27 10:54:29 crc kubenswrapper[4728]: I0227 10:54:29.071938 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f08cf8c5-14af-42cf-be14-97e3871f6801-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-f57ll\" (UID: \"f08cf8c5-14af-42cf-be14-97e3871f6801\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll" Feb 27 10:54:29 crc kubenswrapper[4728]: I0227 10:54:29.072061 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh982\" (UniqueName: \"kubernetes.io/projected/f08cf8c5-14af-42cf-be14-97e3871f6801-kube-api-access-vh982\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-f57ll\" (UID: \"f08cf8c5-14af-42cf-be14-97e3871f6801\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll" Feb 27 10:54:29 crc kubenswrapper[4728]: I0227 10:54:29.072187 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f08cf8c5-14af-42cf-be14-97e3871f6801-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-f57ll\" (UID: \"f08cf8c5-14af-42cf-be14-97e3871f6801\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll" Feb 27 10:54:29 crc kubenswrapper[4728]: I0227 10:54:29.076366 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f08cf8c5-14af-42cf-be14-97e3871f6801-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-f57ll\" (UID: \"f08cf8c5-14af-42cf-be14-97e3871f6801\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll" Feb 27 10:54:29 crc kubenswrapper[4728]: I0227 10:54:29.076366 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f08cf8c5-14af-42cf-be14-97e3871f6801-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-f57ll\" (UID: \"f08cf8c5-14af-42cf-be14-97e3871f6801\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll" Feb 27 10:54:29 crc kubenswrapper[4728]: I0227 10:54:29.096192 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh982\" (UniqueName: \"kubernetes.io/projected/f08cf8c5-14af-42cf-be14-97e3871f6801-kube-api-access-vh982\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-f57ll\" (UID: \"f08cf8c5-14af-42cf-be14-97e3871f6801\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll" Feb 27 10:54:29 crc kubenswrapper[4728]: I0227 10:54:29.164138 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll" Feb 27 10:54:29 crc kubenswrapper[4728]: I0227 10:54:29.712973 4728 generic.go:334] "Generic (PLEG): container finished" podID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerID="9752f11f948dc0098e7f9171797e12dd9bdabf580206df8f6012025a1ce7634e" exitCode=0 Feb 27 10:54:29 crc kubenswrapper[4728]: I0227 10:54:29.713256 4728 generic.go:334] "Generic (PLEG): container finished" podID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerID="f1abe65966a47425466ace996490bcf4d94d3349b1740e7639e183104240803e" exitCode=0 Feb 27 10:54:29 crc kubenswrapper[4728]: I0227 10:54:29.713028 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a056167f-457f-4547-ab3e-cbe2433d3cfc","Type":"ContainerDied","Data":"9752f11f948dc0098e7f9171797e12dd9bdabf580206df8f6012025a1ce7634e"} Feb 27 10:54:29 crc kubenswrapper[4728]: I0227 10:54:29.713291 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a056167f-457f-4547-ab3e-cbe2433d3cfc","Type":"ContainerDied","Data":"f1abe65966a47425466ace996490bcf4d94d3349b1740e7639e183104240803e"} Feb 27 10:54:29 crc kubenswrapper[4728]: I0227 10:54:29.735200 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll"] Feb 27 10:54:30 crc kubenswrapper[4728]: I0227 10:54:30.759029 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll" event={"ID":"f08cf8c5-14af-42cf-be14-97e3871f6801","Type":"ContainerStarted","Data":"b79a049669b2207f00fbb77d24f12ba15e87f2850569740c84ec6b8c874c2e1d"} Feb 27 10:54:30 crc kubenswrapper[4728]: I0227 10:54:30.759565 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll" 
event={"ID":"f08cf8c5-14af-42cf-be14-97e3871f6801","Type":"ContainerStarted","Data":"50a1a45a0416fe69a35059eeba52269139de76d09aed468967ee0977ac15563e"} Feb 27 10:54:30 crc kubenswrapper[4728]: I0227 10:54:30.798586 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll" podStartSLOduration=2.290312694 podStartE2EDuration="2.798566316s" podCreationTimestamp="2026-02-27 10:54:28 +0000 UTC" firstStartedPulling="2026-02-27 10:54:29.740051503 +0000 UTC m=+1689.702417609" lastFinishedPulling="2026-02-27 10:54:30.248305125 +0000 UTC m=+1690.210671231" observedRunningTime="2026-02-27 10:54:30.786539691 +0000 UTC m=+1690.748905877" watchObservedRunningTime="2026-02-27 10:54:30.798566316 +0000 UTC m=+1690.760932422" Feb 27 10:54:33 crc kubenswrapper[4728]: I0227 10:54:33.777323 4728 generic.go:334] "Generic (PLEG): container finished" podID="f08cf8c5-14af-42cf-be14-97e3871f6801" containerID="b79a049669b2207f00fbb77d24f12ba15e87f2850569740c84ec6b8c874c2e1d" exitCode=0 Feb 27 10:54:33 crc kubenswrapper[4728]: I0227 10:54:33.777430 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll" event={"ID":"f08cf8c5-14af-42cf-be14-97e3871f6801","Type":"ContainerDied","Data":"b79a049669b2207f00fbb77d24f12ba15e87f2850569740c84ec6b8c874c2e1d"} Feb 27 10:54:34 crc kubenswrapper[4728]: I0227 10:54:34.791012 4728 generic.go:334] "Generic (PLEG): container finished" podID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerID="c050a67b2f5f62b9a818d67124386db2c589134db2850e4f241133daee7eed5b" exitCode=0 Feb 27 10:54:34 crc kubenswrapper[4728]: I0227 10:54:34.791091 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a056167f-457f-4547-ab3e-cbe2433d3cfc","Type":"ContainerDied","Data":"c050a67b2f5f62b9a818d67124386db2c589134db2850e4f241133daee7eed5b"} Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.313341 4728 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.464352 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f08cf8c5-14af-42cf-be14-97e3871f6801-ssh-key-openstack-edpm-ipam\") pod \"f08cf8c5-14af-42cf-be14-97e3871f6801\" (UID: \"f08cf8c5-14af-42cf-be14-97e3871f6801\") " Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.464449 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f08cf8c5-14af-42cf-be14-97e3871f6801-inventory\") pod \"f08cf8c5-14af-42cf-be14-97e3871f6801\" (UID: \"f08cf8c5-14af-42cf-be14-97e3871f6801\") " Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.464748 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh982\" (UniqueName: \"kubernetes.io/projected/f08cf8c5-14af-42cf-be14-97e3871f6801-kube-api-access-vh982\") pod \"f08cf8c5-14af-42cf-be14-97e3871f6801\" (UID: \"f08cf8c5-14af-42cf-be14-97e3871f6801\") " Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.470406 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f08cf8c5-14af-42cf-be14-97e3871f6801-kube-api-access-vh982" (OuterVolumeSpecName: "kube-api-access-vh982") pod "f08cf8c5-14af-42cf-be14-97e3871f6801" (UID: "f08cf8c5-14af-42cf-be14-97e3871f6801"). InnerVolumeSpecName "kube-api-access-vh982". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.516818 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f08cf8c5-14af-42cf-be14-97e3871f6801-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f08cf8c5-14af-42cf-be14-97e3871f6801" (UID: "f08cf8c5-14af-42cf-be14-97e3871f6801"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.518310 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f08cf8c5-14af-42cf-be14-97e3871f6801-inventory" (OuterVolumeSpecName: "inventory") pod "f08cf8c5-14af-42cf-be14-97e3871f6801" (UID: "f08cf8c5-14af-42cf-be14-97e3871f6801"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.567619 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh982\" (UniqueName: \"kubernetes.io/projected/f08cf8c5-14af-42cf-be14-97e3871f6801-kube-api-access-vh982\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.567652 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f08cf8c5-14af-42cf-be14-97e3871f6801-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.567662 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f08cf8c5-14af-42cf-be14-97e3871f6801-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.803766 4728 generic.go:334] "Generic (PLEG): container finished" podID="a056167f-457f-4547-ab3e-cbe2433d3cfc" 
containerID="bfdc65d6422751035eac485694f2efb82adc4ce16dfd42a905a39c32b4823bbf" exitCode=0 Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.803833 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a056167f-457f-4547-ab3e-cbe2433d3cfc","Type":"ContainerDied","Data":"bfdc65d6422751035eac485694f2efb82adc4ce16dfd42a905a39c32b4823bbf"} Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.803859 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"a056167f-457f-4547-ab3e-cbe2433d3cfc","Type":"ContainerDied","Data":"ccf35c61e9d68b3146200ae01aa4c1d041ea7f202a2978c87ec24400a03a78eb"} Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.803868 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccf35c61e9d68b3146200ae01aa4c1d041ea7f202a2978c87ec24400a03a78eb" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.806421 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll" event={"ID":"f08cf8c5-14af-42cf-be14-97e3871f6801","Type":"ContainerDied","Data":"50a1a45a0416fe69a35059eeba52269139de76d09aed468967ee0977ac15563e"} Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.806447 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50a1a45a0416fe69a35059eeba52269139de76d09aed468967ee0977ac15563e" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.806492 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-f57ll" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.807896 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.873530 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tvxw\" (UniqueName: \"kubernetes.io/projected/a056167f-457f-4547-ab3e-cbe2433d3cfc-kube-api-access-4tvxw\") pod \"a056167f-457f-4547-ab3e-cbe2433d3cfc\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.873645 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-internal-tls-certs\") pod \"a056167f-457f-4547-ab3e-cbe2433d3cfc\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.873874 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-combined-ca-bundle\") pod \"a056167f-457f-4547-ab3e-cbe2433d3cfc\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.874035 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-scripts\") pod \"a056167f-457f-4547-ab3e-cbe2433d3cfc\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.874146 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-public-tls-certs\") pod \"a056167f-457f-4547-ab3e-cbe2433d3cfc\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.874212 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-config-data\") pod \"a056167f-457f-4547-ab3e-cbe2433d3cfc\" (UID: \"a056167f-457f-4547-ab3e-cbe2433d3cfc\") " Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.882113 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a056167f-457f-4547-ab3e-cbe2433d3cfc-kube-api-access-4tvxw" (OuterVolumeSpecName: "kube-api-access-4tvxw") pod "a056167f-457f-4547-ab3e-cbe2433d3cfc" (UID: "a056167f-457f-4547-ab3e-cbe2433d3cfc"). InnerVolumeSpecName "kube-api-access-4tvxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.889294 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-scripts" (OuterVolumeSpecName: "scripts") pod "a056167f-457f-4547-ab3e-cbe2433d3cfc" (UID: "a056167f-457f-4547-ab3e-cbe2433d3cfc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.906744 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h"] Feb 27 10:54:35 crc kubenswrapper[4728]: E0227 10:54:35.907644 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerName="aodh-evaluator" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.907809 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerName="aodh-evaluator" Feb 27 10:54:35 crc kubenswrapper[4728]: E0227 10:54:35.907901 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerName="aodh-listener" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.907969 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerName="aodh-listener" Feb 27 10:54:35 crc kubenswrapper[4728]: E0227 10:54:35.908058 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerName="aodh-api" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.908132 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerName="aodh-api" Feb 27 10:54:35 crc kubenswrapper[4728]: E0227 10:54:35.908225 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08cf8c5-14af-42cf-be14-97e3871f6801" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.908310 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f08cf8c5-14af-42cf-be14-97e3871f6801" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 27 10:54:35 crc kubenswrapper[4728]: E0227 10:54:35.908413 4728 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerName="aodh-notifier" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.908636 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerName="aodh-notifier" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.909116 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerName="aodh-api" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.909395 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f08cf8c5-14af-42cf-be14-97e3871f6801" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.909578 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerName="aodh-listener" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.909746 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerName="aodh-notifier" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.909868 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="a056167f-457f-4547-ab3e-cbe2433d3cfc" containerName="aodh-evaluator" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.911240 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.914251 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.914353 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.917221 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.917575 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.921437 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h"] Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.977761 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a056167f-457f-4547-ab3e-cbe2433d3cfc" (UID: "a056167f-457f-4547-ab3e-cbe2433d3cfc"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.980307 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h\" (UID: \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.980390 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h\" (UID: \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.980447 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz9wm\" (UniqueName: \"kubernetes.io/projected/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-kube-api-access-xz9wm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h\" (UID: \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.980480 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h\" (UID: \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.987928 4728 
reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.987971 4728 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.987984 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tvxw\" (UniqueName: \"kubernetes.io/projected/a056167f-457f-4547-ab3e-cbe2433d3cfc-kube-api-access-4tvxw\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:35 crc kubenswrapper[4728]: I0227 10:54:35.988655 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a056167f-457f-4547-ab3e-cbe2433d3cfc" (UID: "a056167f-457f-4547-ab3e-cbe2433d3cfc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.046577 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-config-data" (OuterVolumeSpecName: "config-data") pod "a056167f-457f-4547-ab3e-cbe2433d3cfc" (UID: "a056167f-457f-4547-ab3e-cbe2433d3cfc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.050645 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a056167f-457f-4547-ab3e-cbe2433d3cfc" (UID: "a056167f-457f-4547-ab3e-cbe2433d3cfc"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.090346 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h\" (UID: \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.090524 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h\" (UID: \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.090580 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz9wm\" (UniqueName: \"kubernetes.io/projected/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-kube-api-access-xz9wm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h\" (UID: \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.090608 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h\" (UID: \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.090825 4728 reconciler_common.go:293] "Volume detached for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.090848 4728 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.090863 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a056167f-457f-4547-ab3e-cbe2433d3cfc-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.094226 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h\" (UID: \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.094921 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h\" (UID: \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.095560 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h\" (UID: \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.107566 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz9wm\" (UniqueName: \"kubernetes.io/projected/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-kube-api-access-xz9wm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h\" (UID: \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.239401 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.820717 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.832533 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h"] Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.919710 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.935846 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.980113 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.984174 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.987137 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.987814 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-4dctm" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.988061 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.988322 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.988597 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 27 10:54:36 crc kubenswrapper[4728]: I0227 10:54:36.999450 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.119226 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c73db32f-59f6-49d6-b9c4-f12c029ce737-scripts\") pod \"aodh-0\" (UID: \"c73db32f-59f6-49d6-b9c4-f12c029ce737\") " pod="openstack/aodh-0" Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.119272 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73db32f-59f6-49d6-b9c4-f12c029ce737-internal-tls-certs\") pod \"aodh-0\" (UID: \"c73db32f-59f6-49d6-b9c4-f12c029ce737\") " pod="openstack/aodh-0" Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.119593 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c73db32f-59f6-49d6-b9c4-f12c029ce737-public-tls-certs\") pod \"aodh-0\" (UID: \"c73db32f-59f6-49d6-b9c4-f12c029ce737\") " pod="openstack/aodh-0" Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.119880 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbtcp\" (UniqueName: \"kubernetes.io/projected/c73db32f-59f6-49d6-b9c4-f12c029ce737-kube-api-access-pbtcp\") pod \"aodh-0\" (UID: \"c73db32f-59f6-49d6-b9c4-f12c029ce737\") " pod="openstack/aodh-0" Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.119961 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73db32f-59f6-49d6-b9c4-f12c029ce737-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c73db32f-59f6-49d6-b9c4-f12c029ce737\") " pod="openstack/aodh-0" Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.120071 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73db32f-59f6-49d6-b9c4-f12c029ce737-config-data\") pod \"aodh-0\" (UID: \"c73db32f-59f6-49d6-b9c4-f12c029ce737\") " pod="openstack/aodh-0" Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.222134 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73db32f-59f6-49d6-b9c4-f12c029ce737-public-tls-certs\") pod \"aodh-0\" (UID: \"c73db32f-59f6-49d6-b9c4-f12c029ce737\") " pod="openstack/aodh-0" Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.222606 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbtcp\" (UniqueName: \"kubernetes.io/projected/c73db32f-59f6-49d6-b9c4-f12c029ce737-kube-api-access-pbtcp\") pod \"aodh-0\" (UID: \"c73db32f-59f6-49d6-b9c4-f12c029ce737\") " pod="openstack/aodh-0" Feb 27 10:54:37 crc 
kubenswrapper[4728]: I0227 10:54:37.222660 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73db32f-59f6-49d6-b9c4-f12c029ce737-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c73db32f-59f6-49d6-b9c4-f12c029ce737\") " pod="openstack/aodh-0" Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.222730 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73db32f-59f6-49d6-b9c4-f12c029ce737-config-data\") pod \"aodh-0\" (UID: \"c73db32f-59f6-49d6-b9c4-f12c029ce737\") " pod="openstack/aodh-0" Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.222858 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c73db32f-59f6-49d6-b9c4-f12c029ce737-scripts\") pod \"aodh-0\" (UID: \"c73db32f-59f6-49d6-b9c4-f12c029ce737\") " pod="openstack/aodh-0" Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.222897 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73db32f-59f6-49d6-b9c4-f12c029ce737-internal-tls-certs\") pod \"aodh-0\" (UID: \"c73db32f-59f6-49d6-b9c4-f12c029ce737\") " pod="openstack/aodh-0" Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.228018 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c73db32f-59f6-49d6-b9c4-f12c029ce737-scripts\") pod \"aodh-0\" (UID: \"c73db32f-59f6-49d6-b9c4-f12c029ce737\") " pod="openstack/aodh-0" Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.228121 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73db32f-59f6-49d6-b9c4-f12c029ce737-internal-tls-certs\") pod \"aodh-0\" (UID: \"c73db32f-59f6-49d6-b9c4-f12c029ce737\") " 
pod="openstack/aodh-0" Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.228589 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73db32f-59f6-49d6-b9c4-f12c029ce737-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c73db32f-59f6-49d6-b9c4-f12c029ce737\") " pod="openstack/aodh-0" Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.228814 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73db32f-59f6-49d6-b9c4-f12c029ce737-public-tls-certs\") pod \"aodh-0\" (UID: \"c73db32f-59f6-49d6-b9c4-f12c029ce737\") " pod="openstack/aodh-0" Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.230550 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73db32f-59f6-49d6-b9c4-f12c029ce737-config-data\") pod \"aodh-0\" (UID: \"c73db32f-59f6-49d6-b9c4-f12c029ce737\") " pod="openstack/aodh-0" Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.256212 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbtcp\" (UniqueName: \"kubernetes.io/projected/c73db32f-59f6-49d6-b9c4-f12c029ce737-kube-api-access-pbtcp\") pod \"aodh-0\" (UID: \"c73db32f-59f6-49d6-b9c4-f12c029ce737\") " pod="openstack/aodh-0" Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.305001 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 27 10:54:37 crc kubenswrapper[4728]: W0227 10:54:37.809849 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc73db32f_59f6_49d6_b9c4_f12c029ce737.slice/crio-29406b74fd00a78e85ae03d8d4257387ee466838e450863d260f779e293d0ac2 WatchSource:0}: Error finding container 29406b74fd00a78e85ae03d8d4257387ee466838e450863d260f779e293d0ac2: Status 404 returned error can't find the container with id 29406b74fd00a78e85ae03d8d4257387ee466838e450863d260f779e293d0ac2 Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.810084 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.841786 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c73db32f-59f6-49d6-b9c4-f12c029ce737","Type":"ContainerStarted","Data":"29406b74fd00a78e85ae03d8d4257387ee466838e450863d260f779e293d0ac2"} Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.845941 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" event={"ID":"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec","Type":"ContainerStarted","Data":"8375187feb6eafc85c777b89484e040717e873383b72023ac2ee4912c1a51144"} Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.846010 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" event={"ID":"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec","Type":"ContainerStarted","Data":"d1f840218877cd9c35dcddd965e6a5fd881a4fd07c08e2000c3b1fe82acdbe97"} Feb 27 10:54:37 crc kubenswrapper[4728]: I0227 10:54:37.866826 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" podStartSLOduration=2.473974045 podStartE2EDuration="2.866800815s" 
podCreationTimestamp="2026-02-27 10:54:35 +0000 UTC" firstStartedPulling="2026-02-27 10:54:36.827718308 +0000 UTC m=+1696.790084414" lastFinishedPulling="2026-02-27 10:54:37.220545078 +0000 UTC m=+1697.182911184" observedRunningTime="2026-02-27 10:54:37.862062395 +0000 UTC m=+1697.824428501" watchObservedRunningTime="2026-02-27 10:54:37.866800815 +0000 UTC m=+1697.829166961" Feb 27 10:54:38 crc kubenswrapper[4728]: I0227 10:54:38.746960 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a056167f-457f-4547-ab3e-cbe2433d3cfc" path="/var/lib/kubelet/pods/a056167f-457f-4547-ab3e-cbe2433d3cfc/volumes" Feb 27 10:54:38 crc kubenswrapper[4728]: I0227 10:54:38.863061 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c73db32f-59f6-49d6-b9c4-f12c029ce737","Type":"ContainerStarted","Data":"fec243f770744ff22c9a269138f261847008cb65ea012f12486588f2bcf71e86"} Feb 27 10:54:39 crc kubenswrapper[4728]: I0227 10:54:39.904377 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c73db32f-59f6-49d6-b9c4-f12c029ce737","Type":"ContainerStarted","Data":"cdb95cdde4031724735370c2fe227d856272961e47a6da4929e540474db5cc4a"} Feb 27 10:54:40 crc kubenswrapper[4728]: I0227 10:54:40.921464 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c73db32f-59f6-49d6-b9c4-f12c029ce737","Type":"ContainerStarted","Data":"13d8e46278f9f3c316ac2787a35d9624985cf3e63c96eb1e2ae3437824ddbf9f"} Feb 27 10:54:41 crc kubenswrapper[4728]: I0227 10:54:41.937359 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c73db32f-59f6-49d6-b9c4-f12c029ce737","Type":"ContainerStarted","Data":"e863ae8049928c140cc4d74e20847e4e7f40b2879548aa9a42afb56e154db8ac"} Feb 27 10:54:41 crc kubenswrapper[4728]: I0227 10:54:41.976436 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.372865417 
podStartE2EDuration="5.976415049s" podCreationTimestamp="2026-02-27 10:54:36 +0000 UTC" firstStartedPulling="2026-02-27 10:54:37.816462859 +0000 UTC m=+1697.778828995" lastFinishedPulling="2026-02-27 10:54:41.420012521 +0000 UTC m=+1701.382378627" observedRunningTime="2026-02-27 10:54:41.956287393 +0000 UTC m=+1701.918653499" watchObservedRunningTime="2026-02-27 10:54:41.976415049 +0000 UTC m=+1701.938781155" Feb 27 10:54:56 crc kubenswrapper[4728]: I0227 10:54:56.157791 4728 generic.go:334] "Generic (PLEG): container finished" podID="2fdabfa1-9c8c-4434-8a44-30d40a18a023" containerID="97908c6f004c817eccf798c8e2009d6855ffd255f254f06e8c61f3c153b09a11" exitCode=0 Feb 27 10:54:56 crc kubenswrapper[4728]: I0227 10:54:56.157886 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2fdabfa1-9c8c-4434-8a44-30d40a18a023","Type":"ContainerDied","Data":"97908c6f004c817eccf798c8e2009d6855ffd255f254f06e8c61f3c153b09a11"} Feb 27 10:54:57 crc kubenswrapper[4728]: I0227 10:54:57.177652 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2fdabfa1-9c8c-4434-8a44-30d40a18a023","Type":"ContainerStarted","Data":"bdeff9564a2143769a4a2f3d9f2ed1129fb064527673aaa86e241a8c55059e97"} Feb 27 10:54:57 crc kubenswrapper[4728]: I0227 10:54:57.178581 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 27 10:54:57 crc kubenswrapper[4728]: I0227 10:54:57.223232 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.223209363 podStartE2EDuration="37.223209363s" podCreationTimestamp="2026-02-27 10:54:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:54:57.209688616 +0000 UTC m=+1717.172054732" watchObservedRunningTime="2026-02-27 10:54:57.223209363 +0000 UTC 
m=+1717.185575479" Feb 27 10:55:05 crc kubenswrapper[4728]: I0227 10:55:05.922681 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:55:05 crc kubenswrapper[4728]: I0227 10:55:05.923416 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:55:11 crc kubenswrapper[4728]: I0227 10:55:11.041749 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 27 10:55:11 crc kubenswrapper[4728]: I0227 10:55:11.115374 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 10:55:15 crc kubenswrapper[4728]: I0227 10:55:15.833600 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="5948716b-2c2b-4a90-b4b5-f8daad17f020" containerName="rabbitmq" containerID="cri-o://9134c47853f75890d3740060e1e872c3d4608294607fb6e716c7ae974c2862b6" gracePeriod=604796 Feb 27 10:55:19 crc kubenswrapper[4728]: I0227 10:55:19.579296 4728 scope.go:117] "RemoveContainer" containerID="9bcd714432e176011983c4468db26de87b76e3c74ff679b41d17a3ec6ef5e954" Feb 27 10:55:19 crc kubenswrapper[4728]: I0227 10:55:19.621682 4728 scope.go:117] "RemoveContainer" containerID="dd812a37993c075fb2911e1dcdf493cf564cd9a15900330e4de3fd18c118594f" Feb 27 10:55:21 crc kubenswrapper[4728]: I0227 10:55:21.337667 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="5948716b-2c2b-4a90-b4b5-f8daad17f020" 
containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.494950 4728 generic.go:334] "Generic (PLEG): container finished" podID="5948716b-2c2b-4a90-b4b5-f8daad17f020" containerID="9134c47853f75890d3740060e1e872c3d4608294607fb6e716c7ae974c2862b6" exitCode=0 Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.495045 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5948716b-2c2b-4a90-b4b5-f8daad17f020","Type":"ContainerDied","Data":"9134c47853f75890d3740060e1e872c3d4608294607fb6e716c7ae974c2862b6"} Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.495261 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5948716b-2c2b-4a90-b4b5-f8daad17f020","Type":"ContainerDied","Data":"bad6251d347acc9e66730129b933f53650d03311f9dc5315a9f605ae06d55405"} Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.495281 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bad6251d347acc9e66730129b933f53650d03311f9dc5315a9f605ae06d55405" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.562491 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.673785 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j76f9\" (UniqueName: \"kubernetes.io/projected/5948716b-2c2b-4a90-b4b5-f8daad17f020-kube-api-access-j76f9\") pod \"5948716b-2c2b-4a90-b4b5-f8daad17f020\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.674677 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\") pod \"5948716b-2c2b-4a90-b4b5-f8daad17f020\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.674775 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5948716b-2c2b-4a90-b4b5-f8daad17f020-erlang-cookie-secret\") pod \"5948716b-2c2b-4a90-b4b5-f8daad17f020\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.674901 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5948716b-2c2b-4a90-b4b5-f8daad17f020-pod-info\") pod \"5948716b-2c2b-4a90-b4b5-f8daad17f020\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.674983 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-tls\") pod \"5948716b-2c2b-4a90-b4b5-f8daad17f020\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.675024 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-confd\") pod \"5948716b-2c2b-4a90-b4b5-f8daad17f020\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.675102 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5948716b-2c2b-4a90-b4b5-f8daad17f020-server-conf\") pod \"5948716b-2c2b-4a90-b4b5-f8daad17f020\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.675131 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5948716b-2c2b-4a90-b4b5-f8daad17f020-plugins-conf\") pod \"5948716b-2c2b-4a90-b4b5-f8daad17f020\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.675187 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-erlang-cookie\") pod \"5948716b-2c2b-4a90-b4b5-f8daad17f020\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.675223 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5948716b-2c2b-4a90-b4b5-f8daad17f020-config-data\") pod \"5948716b-2c2b-4a90-b4b5-f8daad17f020\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.675267 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-plugins\") pod \"5948716b-2c2b-4a90-b4b5-f8daad17f020\" (UID: \"5948716b-2c2b-4a90-b4b5-f8daad17f020\") " Feb 27 10:55:22 
crc kubenswrapper[4728]: I0227 10:55:22.675755 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5948716b-2c2b-4a90-b4b5-f8daad17f020-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5948716b-2c2b-4a90-b4b5-f8daad17f020" (UID: "5948716b-2c2b-4a90-b4b5-f8daad17f020"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.676256 4728 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5948716b-2c2b-4a90-b4b5-f8daad17f020-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.679726 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5948716b-2c2b-4a90-b4b5-f8daad17f020" (UID: "5948716b-2c2b-4a90-b4b5-f8daad17f020"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.683380 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5948716b-2c2b-4a90-b4b5-f8daad17f020" (UID: "5948716b-2c2b-4a90-b4b5-f8daad17f020"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.690851 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5948716b-2c2b-4a90-b4b5-f8daad17f020-kube-api-access-j76f9" (OuterVolumeSpecName: "kube-api-access-j76f9") pod "5948716b-2c2b-4a90-b4b5-f8daad17f020" (UID: "5948716b-2c2b-4a90-b4b5-f8daad17f020"). InnerVolumeSpecName "kube-api-access-j76f9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.690960 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5948716b-2c2b-4a90-b4b5-f8daad17f020-pod-info" (OuterVolumeSpecName: "pod-info") pod "5948716b-2c2b-4a90-b4b5-f8daad17f020" (UID: "5948716b-2c2b-4a90-b4b5-f8daad17f020"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.692785 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5948716b-2c2b-4a90-b4b5-f8daad17f020" (UID: "5948716b-2c2b-4a90-b4b5-f8daad17f020"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.714054 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5948716b-2c2b-4a90-b4b5-f8daad17f020-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5948716b-2c2b-4a90-b4b5-f8daad17f020" (UID: "5948716b-2c2b-4a90-b4b5-f8daad17f020"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.742743 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c" (OuterVolumeSpecName: "persistence") pod "5948716b-2c2b-4a90-b4b5-f8daad17f020" (UID: "5948716b-2c2b-4a90-b4b5-f8daad17f020"). InnerVolumeSpecName "pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.743167 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5948716b-2c2b-4a90-b4b5-f8daad17f020-config-data" (OuterVolumeSpecName: "config-data") pod "5948716b-2c2b-4a90-b4b5-f8daad17f020" (UID: "5948716b-2c2b-4a90-b4b5-f8daad17f020"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.778397 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j76f9\" (UniqueName: \"kubernetes.io/projected/5948716b-2c2b-4a90-b4b5-f8daad17f020-kube-api-access-j76f9\") on node \"crc\" DevicePath \"\"" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.778455 4728 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\") on node \"crc\" " Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.778471 4728 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5948716b-2c2b-4a90-b4b5-f8daad17f020-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.778485 4728 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5948716b-2c2b-4a90-b4b5-f8daad17f020-pod-info\") on node \"crc\" DevicePath \"\"" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.778496 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.778511 4728 reconciler_common.go:293] "Volume detached for 
volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.778544 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5948716b-2c2b-4a90-b4b5-f8daad17f020-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.778556 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.802967 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5948716b-2c2b-4a90-b4b5-f8daad17f020-server-conf" (OuterVolumeSpecName: "server-conf") pod "5948716b-2c2b-4a90-b4b5-f8daad17f020" (UID: "5948716b-2c2b-4a90-b4b5-f8daad17f020"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.833942 4728 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.834160 4728 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c") on node "crc" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.847813 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "5948716b-2c2b-4a90-b4b5-f8daad17f020" (UID: "5948716b-2c2b-4a90-b4b5-f8daad17f020"). 
InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.880204 4728 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5948716b-2c2b-4a90-b4b5-f8daad17f020-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.880248 4728 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5948716b-2c2b-4a90-b4b5-f8daad17f020-server-conf\") on node \"crc\" DevicePath \"\"" Feb 27 10:55:22 crc kubenswrapper[4728]: I0227 10:55:22.880263 4728 reconciler_common.go:293] "Volume detached for volume \"pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\") on node \"crc\" DevicePath \"\"" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.505050 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.541695 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.554858 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.587133 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 10:55:23 crc kubenswrapper[4728]: E0227 10:55:23.588004 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5948716b-2c2b-4a90-b4b5-f8daad17f020" containerName="rabbitmq" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.588029 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="5948716b-2c2b-4a90-b4b5-f8daad17f020" containerName="rabbitmq" Feb 27 10:55:23 crc kubenswrapper[4728]: E0227 10:55:23.588048 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5948716b-2c2b-4a90-b4b5-f8daad17f020" containerName="setup-container" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.588057 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="5948716b-2c2b-4a90-b4b5-f8daad17f020" containerName="setup-container" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.588359 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="5948716b-2c2b-4a90-b4b5-f8daad17f020" containerName="rabbitmq" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.590123 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.606067 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.705132 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52rfx\" (UniqueName: \"kubernetes.io/projected/73e4e1d2-5dc4-4578-bd53-f83b95638094-kube-api-access-52rfx\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.705245 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73e4e1d2-5dc4-4578-bd53-f83b95638094-config-data\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.705285 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73e4e1d2-5dc4-4578-bd53-f83b95638094-pod-info\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.705314 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73e4e1d2-5dc4-4578-bd53-f83b95638094-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.705361 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/73e4e1d2-5dc4-4578-bd53-f83b95638094-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.705462 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73e4e1d2-5dc4-4578-bd53-f83b95638094-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.705535 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73e4e1d2-5dc4-4578-bd53-f83b95638094-server-conf\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.705586 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73e4e1d2-5dc4-4578-bd53-f83b95638094-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.705622 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73e4e1d2-5dc4-4578-bd53-f83b95638094-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.705695 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/73e4e1d2-5dc4-4578-bd53-f83b95638094-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.705780 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.808111 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73e4e1d2-5dc4-4578-bd53-f83b95638094-server-conf\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.808172 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73e4e1d2-5dc4-4578-bd53-f83b95638094-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.808193 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73e4e1d2-5dc4-4578-bd53-f83b95638094-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.808240 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73e4e1d2-5dc4-4578-bd53-f83b95638094-plugins-conf\") pod \"rabbitmq-server-0\" 
(UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.808297 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.808386 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52rfx\" (UniqueName: \"kubernetes.io/projected/73e4e1d2-5dc4-4578-bd53-f83b95638094-kube-api-access-52rfx\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.808438 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73e4e1d2-5dc4-4578-bd53-f83b95638094-config-data\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.808468 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73e4e1d2-5dc4-4578-bd53-f83b95638094-pod-info\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.808491 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73e4e1d2-5dc4-4578-bd53-f83b95638094-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc 
kubenswrapper[4728]: I0227 10:55:23.808612 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73e4e1d2-5dc4-4578-bd53-f83b95638094-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.808659 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73e4e1d2-5dc4-4578-bd53-f83b95638094-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.810261 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73e4e1d2-5dc4-4578-bd53-f83b95638094-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.810284 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73e4e1d2-5dc4-4578-bd53-f83b95638094-server-conf\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.811371 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73e4e1d2-5dc4-4578-bd53-f83b95638094-config-data\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.811548 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/73e4e1d2-5dc4-4578-bd53-f83b95638094-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.812307 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73e4e1d2-5dc4-4578-bd53-f83b95638094-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.814911 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73e4e1d2-5dc4-4578-bd53-f83b95638094-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.815650 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73e4e1d2-5dc4-4578-bd53-f83b95638094-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.816207 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73e4e1d2-5dc4-4578-bd53-f83b95638094-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.816448 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73e4e1d2-5dc4-4578-bd53-f83b95638094-pod-info\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 
10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.826203 4728 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.826247 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cf7e939e9da33831c806ab7477c702db97a95da5329aaac1969ecd83a668f3c0/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.828095 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52rfx\" (UniqueName: \"kubernetes.io/projected/73e4e1d2-5dc4-4578-bd53-f83b95638094-kube-api-access-52rfx\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.889198 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3f36d95-12a2-44f4-8352-45ce349ef93c\") pod \"rabbitmq-server-0\" (UID: \"73e4e1d2-5dc4-4578-bd53-f83b95638094\") " pod="openstack/rabbitmq-server-0" Feb 27 10:55:23 crc kubenswrapper[4728]: I0227 10:55:23.923771 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 10:55:24 crc kubenswrapper[4728]: I0227 10:55:24.453608 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 10:55:24 crc kubenswrapper[4728]: I0227 10:55:24.535230 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"73e4e1d2-5dc4-4578-bd53-f83b95638094","Type":"ContainerStarted","Data":"6ab50bf1413d8b0965b2671274412a7d92b311ad51854f857d536ce05e51bf1b"} Feb 27 10:55:24 crc kubenswrapper[4728]: I0227 10:55:24.738303 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5948716b-2c2b-4a90-b4b5-f8daad17f020" path="/var/lib/kubelet/pods/5948716b-2c2b-4a90-b4b5-f8daad17f020/volumes" Feb 27 10:55:27 crc kubenswrapper[4728]: I0227 10:55:27.570317 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"73e4e1d2-5dc4-4578-bd53-f83b95638094","Type":"ContainerStarted","Data":"515f5ba257aa3cd4a17b0fa11c027317f8d0282f8a2e15fd8141b3f80ff33499"} Feb 27 10:55:35 crc kubenswrapper[4728]: I0227 10:55:35.921806 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:55:35 crc kubenswrapper[4728]: I0227 10:55:35.922538 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:55:58 crc kubenswrapper[4728]: I0227 10:55:58.967563 4728 generic.go:334] "Generic (PLEG): container finished" podID="73e4e1d2-5dc4-4578-bd53-f83b95638094" 
containerID="515f5ba257aa3cd4a17b0fa11c027317f8d0282f8a2e15fd8141b3f80ff33499" exitCode=0 Feb 27 10:55:58 crc kubenswrapper[4728]: I0227 10:55:58.967782 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"73e4e1d2-5dc4-4578-bd53-f83b95638094","Type":"ContainerDied","Data":"515f5ba257aa3cd4a17b0fa11c027317f8d0282f8a2e15fd8141b3f80ff33499"} Feb 27 10:55:59 crc kubenswrapper[4728]: I0227 10:55:59.985135 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"73e4e1d2-5dc4-4578-bd53-f83b95638094","Type":"ContainerStarted","Data":"a63fadbff9f8cb1a637f65abf07990f897e1a7faab2bf7acf5d2cbc2570fae5c"} Feb 27 10:55:59 crc kubenswrapper[4728]: I0227 10:55:59.985712 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 27 10:56:00 crc kubenswrapper[4728]: I0227 10:56:00.041685 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.041661089 podStartE2EDuration="37.041661089s" podCreationTimestamp="2026-02-27 10:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 10:56:00.025081409 +0000 UTC m=+1779.987447535" watchObservedRunningTime="2026-02-27 10:56:00.041661089 +0000 UTC m=+1780.004027205" Feb 27 10:56:00 crc kubenswrapper[4728]: I0227 10:56:00.170454 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536496-6p4m5"] Feb 27 10:56:00 crc kubenswrapper[4728]: I0227 10:56:00.173031 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536496-6p4m5" Feb 27 10:56:00 crc kubenswrapper[4728]: I0227 10:56:00.175406 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:56:00 crc kubenswrapper[4728]: I0227 10:56:00.177047 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:56:00 crc kubenswrapper[4728]: I0227 10:56:00.177786 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 10:56:00 crc kubenswrapper[4728]: I0227 10:56:00.197711 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536496-6p4m5"] Feb 27 10:56:00 crc kubenswrapper[4728]: I0227 10:56:00.238639 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m669r\" (UniqueName: \"kubernetes.io/projected/95407fb7-61a8-4fd7-8059-63734e7e50e9-kube-api-access-m669r\") pod \"auto-csr-approver-29536496-6p4m5\" (UID: \"95407fb7-61a8-4fd7-8059-63734e7e50e9\") " pod="openshift-infra/auto-csr-approver-29536496-6p4m5" Feb 27 10:56:00 crc kubenswrapper[4728]: I0227 10:56:00.341465 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m669r\" (UniqueName: \"kubernetes.io/projected/95407fb7-61a8-4fd7-8059-63734e7e50e9-kube-api-access-m669r\") pod \"auto-csr-approver-29536496-6p4m5\" (UID: \"95407fb7-61a8-4fd7-8059-63734e7e50e9\") " pod="openshift-infra/auto-csr-approver-29536496-6p4m5" Feb 27 10:56:00 crc kubenswrapper[4728]: I0227 10:56:00.382058 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m669r\" (UniqueName: \"kubernetes.io/projected/95407fb7-61a8-4fd7-8059-63734e7e50e9-kube-api-access-m669r\") pod \"auto-csr-approver-29536496-6p4m5\" (UID: \"95407fb7-61a8-4fd7-8059-63734e7e50e9\") " 
pod="openshift-infra/auto-csr-approver-29536496-6p4m5" Feb 27 10:56:00 crc kubenswrapper[4728]: I0227 10:56:00.507441 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536496-6p4m5" Feb 27 10:56:00 crc kubenswrapper[4728]: I0227 10:56:00.992960 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536496-6p4m5"] Feb 27 10:56:00 crc kubenswrapper[4728]: W0227 10:56:00.994719 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95407fb7_61a8_4fd7_8059_63734e7e50e9.slice/crio-0369d5b8ccf60629666a136bed5d9604876706d7015095f1547d7904f09fe2f0 WatchSource:0}: Error finding container 0369d5b8ccf60629666a136bed5d9604876706d7015095f1547d7904f09fe2f0: Status 404 returned error can't find the container with id 0369d5b8ccf60629666a136bed5d9604876706d7015095f1547d7904f09fe2f0 Feb 27 10:56:02 crc kubenswrapper[4728]: I0227 10:56:02.019848 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536496-6p4m5" event={"ID":"95407fb7-61a8-4fd7-8059-63734e7e50e9","Type":"ContainerStarted","Data":"0369d5b8ccf60629666a136bed5d9604876706d7015095f1547d7904f09fe2f0"} Feb 27 10:56:03 crc kubenswrapper[4728]: I0227 10:56:03.034621 4728 generic.go:334] "Generic (PLEG): container finished" podID="95407fb7-61a8-4fd7-8059-63734e7e50e9" containerID="83031d97540b7e4e511d750099704061677cbb54f2239b7990769fc6ef19d022" exitCode=0 Feb 27 10:56:03 crc kubenswrapper[4728]: I0227 10:56:03.034683 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536496-6p4m5" event={"ID":"95407fb7-61a8-4fd7-8059-63734e7e50e9","Type":"ContainerDied","Data":"83031d97540b7e4e511d750099704061677cbb54f2239b7990769fc6ef19d022"} Feb 27 10:56:04 crc kubenswrapper[4728]: I0227 10:56:04.609495 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536496-6p4m5" Feb 27 10:56:04 crc kubenswrapper[4728]: I0227 10:56:04.771566 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m669r\" (UniqueName: \"kubernetes.io/projected/95407fb7-61a8-4fd7-8059-63734e7e50e9-kube-api-access-m669r\") pod \"95407fb7-61a8-4fd7-8059-63734e7e50e9\" (UID: \"95407fb7-61a8-4fd7-8059-63734e7e50e9\") " Feb 27 10:56:04 crc kubenswrapper[4728]: I0227 10:56:04.794576 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95407fb7-61a8-4fd7-8059-63734e7e50e9-kube-api-access-m669r" (OuterVolumeSpecName: "kube-api-access-m669r") pod "95407fb7-61a8-4fd7-8059-63734e7e50e9" (UID: "95407fb7-61a8-4fd7-8059-63734e7e50e9"). InnerVolumeSpecName "kube-api-access-m669r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:56:04 crc kubenswrapper[4728]: I0227 10:56:04.876177 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m669r\" (UniqueName: \"kubernetes.io/projected/95407fb7-61a8-4fd7-8059-63734e7e50e9-kube-api-access-m669r\") on node \"crc\" DevicePath \"\"" Feb 27 10:56:05 crc kubenswrapper[4728]: I0227 10:56:05.059737 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536496-6p4m5" event={"ID":"95407fb7-61a8-4fd7-8059-63734e7e50e9","Type":"ContainerDied","Data":"0369d5b8ccf60629666a136bed5d9604876706d7015095f1547d7904f09fe2f0"} Feb 27 10:56:05 crc kubenswrapper[4728]: I0227 10:56:05.059772 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0369d5b8ccf60629666a136bed5d9604876706d7015095f1547d7904f09fe2f0" Feb 27 10:56:05 crc kubenswrapper[4728]: I0227 10:56:05.060062 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536496-6p4m5" Feb 27 10:56:05 crc kubenswrapper[4728]: I0227 10:56:05.696514 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536490-5x76j"] Feb 27 10:56:05 crc kubenswrapper[4728]: I0227 10:56:05.708540 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536490-5x76j"] Feb 27 10:56:05 crc kubenswrapper[4728]: I0227 10:56:05.922048 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 10:56:05 crc kubenswrapper[4728]: I0227 10:56:05.922138 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 10:56:05 crc kubenswrapper[4728]: I0227 10:56:05.922201 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 10:56:05 crc kubenswrapper[4728]: I0227 10:56:05.923791 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4"} pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 10:56:05 crc kubenswrapper[4728]: I0227 10:56:05.923943 4728 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" containerID="cri-o://1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" gracePeriod=600 Feb 27 10:56:06 crc kubenswrapper[4728]: E0227 10:56:06.048835 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:56:06 crc kubenswrapper[4728]: I0227 10:56:06.077184 4728 generic.go:334] "Generic (PLEG): container finished" podID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" exitCode=0 Feb 27 10:56:06 crc kubenswrapper[4728]: I0227 10:56:06.077597 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerDied","Data":"1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4"} Feb 27 10:56:06 crc kubenswrapper[4728]: I0227 10:56:06.077769 4728 scope.go:117] "RemoveContainer" containerID="e31218d6f1087057441016d4a0b5eeb91a41486bd3e9c9784604100aaaedc60a" Feb 27 10:56:06 crc kubenswrapper[4728]: I0227 10:56:06.078716 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:56:06 crc kubenswrapper[4728]: E0227 10:56:06.079186 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:56:06 crc kubenswrapper[4728]: I0227 10:56:06.739941 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33b33e17-516b-44dc-b35c-267b155abfa3" path="/var/lib/kubelet/pods/33b33e17-516b-44dc-b35c-267b155abfa3/volumes" Feb 27 10:56:13 crc kubenswrapper[4728]: I0227 10:56:13.928689 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 27 10:56:19 crc kubenswrapper[4728]: I0227 10:56:19.725525 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:56:19 crc kubenswrapper[4728]: E0227 10:56:19.726584 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:56:19 crc kubenswrapper[4728]: I0227 10:56:19.766796 4728 scope.go:117] "RemoveContainer" containerID="2e02056dd29e84312f0a4af87f7933d56f9035ceff201854e18c2833432799dd" Feb 27 10:56:19 crc kubenswrapper[4728]: I0227 10:56:19.823258 4728 scope.go:117] "RemoveContainer" containerID="44fb28981bed6077cf7613fe63246cf6fffc2f68088a0eb025ac856ba856db60" Feb 27 10:56:19 crc kubenswrapper[4728]: I0227 10:56:19.872620 4728 scope.go:117] "RemoveContainer" containerID="9134c47853f75890d3740060e1e872c3d4608294607fb6e716c7ae974c2862b6" Feb 27 10:56:19 crc kubenswrapper[4728]: I0227 10:56:19.935346 4728 scope.go:117] "RemoveContainer" 
containerID="4f0bce30e7995e8fede562a5ec4a20cf6cce0e35118324e85e7d86547b46c89b" Feb 27 10:56:19 crc kubenswrapper[4728]: I0227 10:56:19.968530 4728 scope.go:117] "RemoveContainer" containerID="1ea7d949740097930544753162eda041e0ea42aee0a8c73e677e4887cdfaae0c" Feb 27 10:56:30 crc kubenswrapper[4728]: I0227 10:56:30.760065 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:56:30 crc kubenswrapper[4728]: E0227 10:56:30.762247 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:56:42 crc kubenswrapper[4728]: I0227 10:56:42.725092 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:56:42 crc kubenswrapper[4728]: E0227 10:56:42.726357 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:56:57 crc kubenswrapper[4728]: I0227 10:56:57.724527 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:56:57 crc kubenswrapper[4728]: E0227 10:56:57.725427 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:57:11 crc kubenswrapper[4728]: I0227 10:57:11.725295 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:57:11 crc kubenswrapper[4728]: E0227 10:57:11.726388 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:57:22 crc kubenswrapper[4728]: I0227 10:57:22.726261 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:57:22 crc kubenswrapper[4728]: E0227 10:57:22.728625 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:57:32 crc kubenswrapper[4728]: I0227 10:57:32.081652 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-n9grl"] Feb 27 10:57:32 crc kubenswrapper[4728]: I0227 10:57:32.119656 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-8g6gt"] Feb 27 10:57:32 crc kubenswrapper[4728]: I0227 
10:57:32.153545 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e32a-account-create-update-kxsr5"] Feb 27 10:57:32 crc kubenswrapper[4728]: I0227 10:57:32.177893 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-8g6gt"] Feb 27 10:57:32 crc kubenswrapper[4728]: I0227 10:57:32.193106 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e32a-account-create-update-kxsr5"] Feb 27 10:57:32 crc kubenswrapper[4728]: I0227 10:57:32.205962 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-n9grl"] Feb 27 10:57:32 crc kubenswrapper[4728]: I0227 10:57:32.740233 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="039bf17c-5c3a-4393-97e1-d8d78fb20fd8" path="/var/lib/kubelet/pods/039bf17c-5c3a-4393-97e1-d8d78fb20fd8/volumes" Feb 27 10:57:32 crc kubenswrapper[4728]: I0227 10:57:32.744318 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f3f20b1-2c57-4b56-9b40-47dae5701446" path="/var/lib/kubelet/pods/0f3f20b1-2c57-4b56-9b40-47dae5701446/volumes" Feb 27 10:57:32 crc kubenswrapper[4728]: I0227 10:57:32.747110 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e17ee83c-ab26-4c75-b2d6-e9278ddfcc06" path="/var/lib/kubelet/pods/e17ee83c-ab26-4c75-b2d6-e9278ddfcc06/volumes" Feb 27 10:57:33 crc kubenswrapper[4728]: I0227 10:57:33.046908 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-63d8-account-create-update-ntp7w"] Feb 27 10:57:33 crc kubenswrapper[4728]: I0227 10:57:33.062315 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-63d8-account-create-update-ntp7w"] Feb 27 10:57:33 crc kubenswrapper[4728]: I0227 10:57:33.076536 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-69b9-account-create-update-4p7vx"] Feb 27 10:57:33 crc kubenswrapper[4728]: I0227 10:57:33.089834 4728 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-h9f2n"] Feb 27 10:57:33 crc kubenswrapper[4728]: I0227 10:57:33.103567 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-851e-account-create-update-8stp4"] Feb 27 10:57:33 crc kubenswrapper[4728]: I0227 10:57:33.116929 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-h9f2n"] Feb 27 10:57:33 crc kubenswrapper[4728]: I0227 10:57:33.130567 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-jvhk9"] Feb 27 10:57:33 crc kubenswrapper[4728]: I0227 10:57:33.142682 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-jvhk9"] Feb 27 10:57:33 crc kubenswrapper[4728]: I0227 10:57:33.152168 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-69b9-account-create-update-4p7vx"] Feb 27 10:57:33 crc kubenswrapper[4728]: I0227 10:57:33.161785 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-851e-account-create-update-8stp4"] Feb 27 10:57:34 crc kubenswrapper[4728]: I0227 10:57:34.258281 4728 generic.go:334] "Generic (PLEG): container finished" podID="e83b55c5-e7f6-4e31-b65e-14e0f39a21ec" containerID="8375187feb6eafc85c777b89484e040717e873383b72023ac2ee4912c1a51144" exitCode=0 Feb 27 10:57:34 crc kubenswrapper[4728]: I0227 10:57:34.258523 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" event={"ID":"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec","Type":"ContainerDied","Data":"8375187feb6eafc85c777b89484e040717e873383b72023ac2ee4912c1a51144"} Feb 27 10:57:34 crc kubenswrapper[4728]: I0227 10:57:34.745031 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13a5f0a3-e12d-4f71-8551-63239926851b" path="/var/lib/kubelet/pods/13a5f0a3-e12d-4f71-8551-63239926851b/volumes" Feb 27 10:57:34 crc 
kubenswrapper[4728]: I0227 10:57:34.747169 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b2435b6-3193-40c1-85b8-e177682f9d3a" path="/var/lib/kubelet/pods/1b2435b6-3193-40c1-85b8-e177682f9d3a/volumes" Feb 27 10:57:34 crc kubenswrapper[4728]: I0227 10:57:34.749689 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f3369f2-cce2-4912-a3ff-59677709bda2" path="/var/lib/kubelet/pods/2f3369f2-cce2-4912-a3ff-59677709bda2/volumes" Feb 27 10:57:34 crc kubenswrapper[4728]: I0227 10:57:34.751274 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="415d3211-9668-4f77-8ea5-6e60c3685c14" path="/var/lib/kubelet/pods/415d3211-9668-4f77-8ea5-6e60c3685c14/volumes" Feb 27 10:57:34 crc kubenswrapper[4728]: I0227 10:57:34.752841 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="439b89bb-a1bf-41c4-bb4f-612eaaeecb2f" path="/var/lib/kubelet/pods/439b89bb-a1bf-41c4-bb4f-612eaaeecb2f/volumes" Feb 27 10:57:35 crc kubenswrapper[4728]: I0227 10:57:35.794848 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" Feb 27 10:57:35 crc kubenswrapper[4728]: I0227 10:57:35.836357 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-inventory\") pod \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\" (UID: \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\") " Feb 27 10:57:35 crc kubenswrapper[4728]: I0227 10:57:35.837293 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-bootstrap-combined-ca-bundle\") pod \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\" (UID: \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\") " Feb 27 10:57:35 crc kubenswrapper[4728]: I0227 10:57:35.837443 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz9wm\" (UniqueName: \"kubernetes.io/projected/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-kube-api-access-xz9wm\") pod \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\" (UID: \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\") " Feb 27 10:57:35 crc kubenswrapper[4728]: I0227 10:57:35.837644 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-ssh-key-openstack-edpm-ipam\") pod \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\" (UID: \"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec\") " Feb 27 10:57:35 crc kubenswrapper[4728]: I0227 10:57:35.867189 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-kube-api-access-xz9wm" (OuterVolumeSpecName: "kube-api-access-xz9wm") pod "e83b55c5-e7f6-4e31-b65e-14e0f39a21ec" (UID: "e83b55c5-e7f6-4e31-b65e-14e0f39a21ec"). InnerVolumeSpecName "kube-api-access-xz9wm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:57:35 crc kubenswrapper[4728]: I0227 10:57:35.904350 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e83b55c5-e7f6-4e31-b65e-14e0f39a21ec" (UID: "e83b55c5-e7f6-4e31-b65e-14e0f39a21ec"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:57:35 crc kubenswrapper[4728]: I0227 10:57:35.948215 4728 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 10:57:35 crc kubenswrapper[4728]: I0227 10:57:35.948459 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz9wm\" (UniqueName: \"kubernetes.io/projected/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-kube-api-access-xz9wm\") on node \"crc\" DevicePath \"\"" Feb 27 10:57:35 crc kubenswrapper[4728]: I0227 10:57:35.985771 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e83b55c5-e7f6-4e31-b65e-14e0f39a21ec" (UID: "e83b55c5-e7f6-4e31-b65e-14e0f39a21ec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.009773 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-inventory" (OuterVolumeSpecName: "inventory") pod "e83b55c5-e7f6-4e31-b65e-14e0f39a21ec" (UID: "e83b55c5-e7f6-4e31-b65e-14e0f39a21ec"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.050973 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.051218 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e83b55c5-e7f6-4e31-b65e-14e0f39a21ec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.287447 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" event={"ID":"e83b55c5-e7f6-4e31-b65e-14e0f39a21ec","Type":"ContainerDied","Data":"d1f840218877cd9c35dcddd965e6a5fd881a4fd07c08e2000c3b1fe82acdbe97"} Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.287705 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1f840218877cd9c35dcddd965e6a5fd881a4fd07c08e2000c3b1fe82acdbe97" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.287520 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.372844 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5"] Feb 27 10:57:36 crc kubenswrapper[4728]: E0227 10:57:36.373381 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e83b55c5-e7f6-4e31-b65e-14e0f39a21ec" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.373408 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e83b55c5-e7f6-4e31-b65e-14e0f39a21ec" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 27 10:57:36 crc kubenswrapper[4728]: E0227 10:57:36.373470 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95407fb7-61a8-4fd7-8059-63734e7e50e9" containerName="oc" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.373479 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="95407fb7-61a8-4fd7-8059-63734e7e50e9" containerName="oc" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.373776 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="95407fb7-61a8-4fd7-8059-63734e7e50e9" containerName="oc" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.373804 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e83b55c5-e7f6-4e31-b65e-14e0f39a21ec" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.374918 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.380009 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.380379 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.380607 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.380792 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.386586 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5"] Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.459866 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a90259d-d214-48ad-ad45-b4c87c9eac15-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5\" (UID: \"6a90259d-d214-48ad-ad45-b4c87c9eac15\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.459958 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a90259d-d214-48ad-ad45-b4c87c9eac15-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5\" (UID: \"6a90259d-d214-48ad-ad45-b4c87c9eac15\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 
10:57:36.460030 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lhkk\" (UniqueName: \"kubernetes.io/projected/6a90259d-d214-48ad-ad45-b4c87c9eac15-kube-api-access-2lhkk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5\" (UID: \"6a90259d-d214-48ad-ad45-b4c87c9eac15\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.561631 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a90259d-d214-48ad-ad45-b4c87c9eac15-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5\" (UID: \"6a90259d-d214-48ad-ad45-b4c87c9eac15\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.561712 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lhkk\" (UniqueName: \"kubernetes.io/projected/6a90259d-d214-48ad-ad45-b4c87c9eac15-kube-api-access-2lhkk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5\" (UID: \"6a90259d-d214-48ad-ad45-b4c87c9eac15\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.561893 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a90259d-d214-48ad-ad45-b4c87c9eac15-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5\" (UID: \"6a90259d-d214-48ad-ad45-b4c87c9eac15\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.566278 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6a90259d-d214-48ad-ad45-b4c87c9eac15-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5\" (UID: \"6a90259d-d214-48ad-ad45-b4c87c9eac15\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.566477 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a90259d-d214-48ad-ad45-b4c87c9eac15-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5\" (UID: \"6a90259d-d214-48ad-ad45-b4c87c9eac15\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.580627 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lhkk\" (UniqueName: \"kubernetes.io/projected/6a90259d-d214-48ad-ad45-b4c87c9eac15-kube-api-access-2lhkk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5\" (UID: \"6a90259d-d214-48ad-ad45-b4c87c9eac15\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.694094 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5" Feb 27 10:57:36 crc kubenswrapper[4728]: I0227 10:57:36.724819 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:57:36 crc kubenswrapper[4728]: E0227 10:57:36.725157 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:57:37 crc kubenswrapper[4728]: I0227 10:57:37.261196 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5"] Feb 27 10:57:37 crc kubenswrapper[4728]: I0227 10:57:37.264908 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 10:57:37 crc kubenswrapper[4728]: I0227 10:57:37.302970 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5" event={"ID":"6a90259d-d214-48ad-ad45-b4c87c9eac15","Type":"ContainerStarted","Data":"2f37e138fafbb37d2596c40118a00f456183fc4aa8962e1d1246e4757680e340"} Feb 27 10:57:38 crc kubenswrapper[4728]: I0227 10:57:38.317764 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5" event={"ID":"6a90259d-d214-48ad-ad45-b4c87c9eac15","Type":"ContainerStarted","Data":"144e34cd8bb594c66ea3de72e539cc96b7a3d21dea93fd44cf8ef27b4b706b86"} Feb 27 10:57:38 crc kubenswrapper[4728]: I0227 10:57:38.360552 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5" podStartSLOduration=1.8235519789999999 podStartE2EDuration="2.360485778s" podCreationTimestamp="2026-02-27 10:57:36 +0000 UTC" firstStartedPulling="2026-02-27 10:57:37.264615172 +0000 UTC m=+1877.226981288" lastFinishedPulling="2026-02-27 10:57:37.801548941 +0000 UTC m=+1877.763915087" observedRunningTime="2026-02-27 10:57:38.33992304 +0000 UTC m=+1878.302289166" watchObservedRunningTime="2026-02-27 10:57:38.360485778 +0000 UTC m=+1878.322851924" Feb 27 10:57:41 crc kubenswrapper[4728]: I0227 10:57:41.042655 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-m2jc8"] Feb 27 10:57:41 crc kubenswrapper[4728]: I0227 10:57:41.055214 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-m2jc8"] Feb 27 10:57:41 crc kubenswrapper[4728]: I0227 10:57:41.066361 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-6a86-account-create-update-5cwvr"] Feb 27 10:57:41 crc kubenswrapper[4728]: I0227 10:57:41.076042 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-xbvtd"] Feb 27 10:57:41 crc kubenswrapper[4728]: I0227 10:57:41.086070 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-xbvtd"] Feb 27 10:57:41 crc kubenswrapper[4728]: I0227 10:57:41.096277 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-6a86-account-create-update-5cwvr"] Feb 27 10:57:42 crc kubenswrapper[4728]: I0227 10:57:42.758756 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170c93c3-680a-4c65-a684-9fcf38798fb2" path="/var/lib/kubelet/pods/170c93c3-680a-4c65-a684-9fcf38798fb2/volumes" Feb 27 10:57:42 crc kubenswrapper[4728]: I0227 10:57:42.762078 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="95695a61-7232-4058-a53f-4452a50cead2" path="/var/lib/kubelet/pods/95695a61-7232-4058-a53f-4452a50cead2/volumes" Feb 27 10:57:42 crc kubenswrapper[4728]: I0227 10:57:42.767048 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f18f0df8-5474-49a0-b699-b8199f62036e" path="/var/lib/kubelet/pods/f18f0df8-5474-49a0-b699-b8199f62036e/volumes" Feb 27 10:57:50 crc kubenswrapper[4728]: I0227 10:57:50.778432 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:57:50 crc kubenswrapper[4728]: E0227 10:57:50.779846 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:58:00 crc kubenswrapper[4728]: I0227 10:58:00.169334 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536498-kg4qj"] Feb 27 10:58:00 crc kubenswrapper[4728]: I0227 10:58:00.172298 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536498-kg4qj" Feb 27 10:58:00 crc kubenswrapper[4728]: I0227 10:58:00.175741 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 10:58:00 crc kubenswrapper[4728]: I0227 10:58:00.175886 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 10:58:00 crc kubenswrapper[4728]: I0227 10:58:00.178560 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 10:58:00 crc kubenswrapper[4728]: I0227 10:58:00.183447 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536498-kg4qj"] Feb 27 10:58:00 crc kubenswrapper[4728]: I0227 10:58:00.315840 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd789\" (UniqueName: \"kubernetes.io/projected/0c4815a0-42fc-4f9d-9452-ea11ed73135b-kube-api-access-vd789\") pod \"auto-csr-approver-29536498-kg4qj\" (UID: \"0c4815a0-42fc-4f9d-9452-ea11ed73135b\") " pod="openshift-infra/auto-csr-approver-29536498-kg4qj" Feb 27 10:58:00 crc kubenswrapper[4728]: I0227 10:58:00.419472 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd789\" (UniqueName: \"kubernetes.io/projected/0c4815a0-42fc-4f9d-9452-ea11ed73135b-kube-api-access-vd789\") pod \"auto-csr-approver-29536498-kg4qj\" (UID: \"0c4815a0-42fc-4f9d-9452-ea11ed73135b\") " pod="openshift-infra/auto-csr-approver-29536498-kg4qj" Feb 27 10:58:00 crc kubenswrapper[4728]: I0227 10:58:00.450331 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd789\" (UniqueName: \"kubernetes.io/projected/0c4815a0-42fc-4f9d-9452-ea11ed73135b-kube-api-access-vd789\") pod \"auto-csr-approver-29536498-kg4qj\" (UID: \"0c4815a0-42fc-4f9d-9452-ea11ed73135b\") " 
pod="openshift-infra/auto-csr-approver-29536498-kg4qj" Feb 27 10:58:00 crc kubenswrapper[4728]: I0227 10:58:00.505376 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536498-kg4qj" Feb 27 10:58:01 crc kubenswrapper[4728]: I0227 10:58:00.999842 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536498-kg4qj"] Feb 27 10:58:01 crc kubenswrapper[4728]: I0227 10:58:01.621768 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536498-kg4qj" event={"ID":"0c4815a0-42fc-4f9d-9452-ea11ed73135b","Type":"ContainerStarted","Data":"af3536ad7c0f69791f78d4c2b5c36f099c1c203a7aa00c5480c2f47e7a3cb21d"} Feb 27 10:58:02 crc kubenswrapper[4728]: I0227 10:58:02.642691 4728 generic.go:334] "Generic (PLEG): container finished" podID="0c4815a0-42fc-4f9d-9452-ea11ed73135b" containerID="ba3cc39ec142272e723f6e09dc3b0f199c93eec7744c706fa3280f9de36ca2d8" exitCode=0 Feb 27 10:58:02 crc kubenswrapper[4728]: I0227 10:58:02.642768 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536498-kg4qj" event={"ID":"0c4815a0-42fc-4f9d-9452-ea11ed73135b","Type":"ContainerDied","Data":"ba3cc39ec142272e723f6e09dc3b0f199c93eec7744c706fa3280f9de36ca2d8"} Feb 27 10:58:04 crc kubenswrapper[4728]: I0227 10:58:04.082645 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536498-kg4qj" Feb 27 10:58:04 crc kubenswrapper[4728]: I0227 10:58:04.215808 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd789\" (UniqueName: \"kubernetes.io/projected/0c4815a0-42fc-4f9d-9452-ea11ed73135b-kube-api-access-vd789\") pod \"0c4815a0-42fc-4f9d-9452-ea11ed73135b\" (UID: \"0c4815a0-42fc-4f9d-9452-ea11ed73135b\") " Feb 27 10:58:04 crc kubenswrapper[4728]: I0227 10:58:04.224399 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c4815a0-42fc-4f9d-9452-ea11ed73135b-kube-api-access-vd789" (OuterVolumeSpecName: "kube-api-access-vd789") pod "0c4815a0-42fc-4f9d-9452-ea11ed73135b" (UID: "0c4815a0-42fc-4f9d-9452-ea11ed73135b"). InnerVolumeSpecName "kube-api-access-vd789". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:58:04 crc kubenswrapper[4728]: I0227 10:58:04.319716 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd789\" (UniqueName: \"kubernetes.io/projected/0c4815a0-42fc-4f9d-9452-ea11ed73135b-kube-api-access-vd789\") on node \"crc\" DevicePath \"\"" Feb 27 10:58:04 crc kubenswrapper[4728]: I0227 10:58:04.667401 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536498-kg4qj" event={"ID":"0c4815a0-42fc-4f9d-9452-ea11ed73135b","Type":"ContainerDied","Data":"af3536ad7c0f69791f78d4c2b5c36f099c1c203a7aa00c5480c2f47e7a3cb21d"} Feb 27 10:58:04 crc kubenswrapper[4728]: I0227 10:58:04.667662 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af3536ad7c0f69791f78d4c2b5c36f099c1c203a7aa00c5480c2f47e7a3cb21d" Feb 27 10:58:04 crc kubenswrapper[4728]: I0227 10:58:04.667437 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536498-kg4qj" Feb 27 10:58:05 crc kubenswrapper[4728]: I0227 10:58:05.174629 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536492-xrgfd"] Feb 27 10:58:05 crc kubenswrapper[4728]: I0227 10:58:05.192216 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536492-xrgfd"] Feb 27 10:58:05 crc kubenswrapper[4728]: I0227 10:58:05.725932 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:58:05 crc kubenswrapper[4728]: E0227 10:58:05.726650 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:58:06 crc kubenswrapper[4728]: I0227 10:58:06.046284 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-98848"] Feb 27 10:58:06 crc kubenswrapper[4728]: I0227 10:58:06.061399 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-98848"] Feb 27 10:58:06 crc kubenswrapper[4728]: I0227 10:58:06.741581 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc607cb0-0557-4198-8bae-07f9a55cf4a5" path="/var/lib/kubelet/pods/cc607cb0-0557-4198-8bae-07f9a55cf4a5/volumes" Feb 27 10:58:06 crc kubenswrapper[4728]: I0227 10:58:06.744267 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787" path="/var/lib/kubelet/pods/f8c25d7b-2d32-4f79-a9f0-c8a53ad7c787/volumes" Feb 27 10:58:16 crc kubenswrapper[4728]: I0227 10:58:16.725308 4728 scope.go:117] 
"RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:58:16 crc kubenswrapper[4728]: E0227 10:58:16.726564 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.044837 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-lb6tb"] Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.061385 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-31b7-account-create-update-gwfvg"] Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.090362 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-4rkxc"] Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.106272 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-lb6tb"] Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.117308 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-4rkxc"] Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.128113 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-qqckp"] Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.137588 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-31b7-account-create-update-gwfvg"] Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.147152 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-151d-account-create-update-dsl49"] Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.156709 4728 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/cinder-151d-account-create-update-dsl49"] Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.164003 4728 scope.go:117] "RemoveContainer" containerID="94083ca2be748847e2f46f86ecabedd161cb7582bee47fabfeca03ac316c26ed" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.170280 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-qqckp"] Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.179946 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-52njv"] Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.190086 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-52njv"] Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.200366 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-332e-account-create-update-mt6gn"] Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.202093 4728 scope.go:117] "RemoveContainer" containerID="962f3a1ebe83c405bb3c360206b65206f001ff682990267d92836a74bf4dfca6" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.225972 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-332e-account-create-update-mt6gn"] Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.231183 4728 scope.go:117] "RemoveContainer" containerID="869c05cfc5f32c79cbd4629a72add801bba595dab4720b90437efd90d5bca092" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.239022 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-6bca-account-create-update-bcjb2"] Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.251872 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-6bca-account-create-update-bcjb2"] Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.300250 4728 scope.go:117] "RemoveContainer" containerID="8b62547dddc79619b7ca691bcc8ad715f6f692b7ea7f6c32aa2f4755733a909a" Feb 27 10:58:20 crc 
kubenswrapper[4728]: I0227 10:58:20.364478 4728 scope.go:117] "RemoveContainer" containerID="0f52ec5c91461471521b8a393b975aede7bcc72dfa8e16fdff2e79103c118587" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.418542 4728 scope.go:117] "RemoveContainer" containerID="f9f2d8d912390de684dbeca893fc6e00f5cff27566b226162549305a7fe73564" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.484891 4728 scope.go:117] "RemoveContainer" containerID="b3e48fbd853810a5fa71c2ec9a6b3a770c83d310830c8678e070b6ff9a459c5b" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.517060 4728 scope.go:117] "RemoveContainer" containerID="0fc5bba26e1853d9f84d1ce5af9e45dc24081e9904ceae5bd9266248fe545f9d" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.577664 4728 scope.go:117] "RemoveContainer" containerID="8766429dec4d5a212661654f4bc0512821fe17b80d97bfe039c3f4c0d0273195" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.604093 4728 scope.go:117] "RemoveContainer" containerID="cc5279c9def68b0f45ba0a3747a69db7f715ce1c662903d49ec00534bf0b9335" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.641580 4728 scope.go:117] "RemoveContainer" containerID="faf9aa723ed87358c780fd4235cfc9648bd021e9891cfab804e4c1dd0f651c01" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.670849 4728 scope.go:117] "RemoveContainer" containerID="41f841acba8e2c07ee5924d86fe2c0f80b3baa7472969b6af931e1d6331adfdb" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.706648 4728 scope.go:117] "RemoveContainer" containerID="dd99b9342a0c90fd99db61847460349bf4d0b929a28db0ba9c8935f77e2b0803" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.738336 4728 scope.go:117] "RemoveContainer" containerID="5e2608124ad79d8b2fe74a73abf5915f99ce69f08f4eac9344a62df0fa7e7571" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.752319 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7100f5fe-cf67-4a79-b69f-c2ccf91d9426" 
path="/var/lib/kubelet/pods/7100f5fe-cf67-4a79-b69f-c2ccf91d9426/volumes" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.755198 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7160eba2-79dd-42c9-8540-30f948234052" path="/var/lib/kubelet/pods/7160eba2-79dd-42c9-8540-30f948234052/volumes" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.757223 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85f6bc27-1d0e-48ad-9a23-60062c5f8bdb" path="/var/lib/kubelet/pods/85f6bc27-1d0e-48ad-9a23-60062c5f8bdb/volumes" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.760821 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd" path="/var/lib/kubelet/pods/8f7f72d1-b9f2-4a8f-b965-6892e4aaa6bd/volumes" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.766250 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abbbfc84-fe7d-4fc9-8f96-d360b6356660" path="/var/lib/kubelet/pods/abbbfc84-fe7d-4fc9-8f96-d360b6356660/volumes" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.769475 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b856c7d2-6928-4bd0-b327-2abd6d6f664f" path="/var/lib/kubelet/pods/b856c7d2-6928-4bd0-b327-2abd6d6f664f/volumes" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.771873 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea36ec6b-4135-4ead-9535-8d54c658c5c3" path="/var/lib/kubelet/pods/ea36ec6b-4135-4ead-9535-8d54c658c5c3/volumes" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.774484 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9136d15-a48d-43e3-aff8-ca09ca1f222b" path="/var/lib/kubelet/pods/f9136d15-a48d-43e3-aff8-ca09ca1f222b/volumes" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.782963 4728 scope.go:117] "RemoveContainer" 
containerID="52fda0da1f1a149338f78ddaeedd2ffd12ef6d0c88817e6ac905bcf6be6c150b" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.822142 4728 scope.go:117] "RemoveContainer" containerID="e4041dbb5c8d290e129167eed9e2894e923153d73e5b5952d283d9b19a991075" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.849839 4728 scope.go:117] "RemoveContainer" containerID="2772bdd65c95354316ecfc963f4c5a14af38ca8c9ccd5d8d81c9c86136755062" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.878242 4728 scope.go:117] "RemoveContainer" containerID="7c4fd34a7a78d5cd5cc475e2214a4881e77504f6f19cc947476ad6351419c003" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.915352 4728 scope.go:117] "RemoveContainer" containerID="0806a4561ad4b075a2627fa32d9790de05161694af2fb1330de565f2a8ac485d" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.946457 4728 scope.go:117] "RemoveContainer" containerID="39c3495ef2240bcb4ccd6ecfbbf1d3a45e7ead86602f45e065cea3b63dc56269" Feb 27 10:58:20 crc kubenswrapper[4728]: I0227 10:58:20.972871 4728 scope.go:117] "RemoveContainer" containerID="3af49d92b83ed7cf122a4b46d316c3c1ac4d9de6b88e604fcbfc15d926ffd8c2" Feb 27 10:58:21 crc kubenswrapper[4728]: I0227 10:58:21.000801 4728 scope.go:117] "RemoveContainer" containerID="9e8a5e2c6b151274c38fd27234450268911df4a0d9684570d479781e0a023a50" Feb 27 10:58:21 crc kubenswrapper[4728]: I0227 10:58:21.026889 4728 scope.go:117] "RemoveContainer" containerID="491c13b241208261f9fd3853edb11bcbae0923fb864e625b7c56cb8f7831783a" Feb 27 10:58:21 crc kubenswrapper[4728]: I0227 10:58:21.053422 4728 scope.go:117] "RemoveContainer" containerID="b1e98f0a01f972e8e0a4e700d3c5a92aaee69d6d29c18e60e1f23b8b081e4173" Feb 27 10:58:21 crc kubenswrapper[4728]: I0227 10:58:21.085463 4728 scope.go:117] "RemoveContainer" containerID="4da79aa54374319530ddbe894d643f5b81add54efea317a80f7bb617f94ba060" Feb 27 10:58:26 crc kubenswrapper[4728]: I0227 10:58:26.049616 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-db-sync-bc9dk"] Feb 27 10:58:26 crc kubenswrapper[4728]: I0227 10:58:26.063105 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-bc9dk"] Feb 27 10:58:26 crc kubenswrapper[4728]: I0227 10:58:26.740327 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d31f26e9-dded-4375-abb8-f038bce13899" path="/var/lib/kubelet/pods/d31f26e9-dded-4375-abb8-f038bce13899/volumes" Feb 27 10:58:30 crc kubenswrapper[4728]: I0227 10:58:30.733951 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:58:30 crc kubenswrapper[4728]: E0227 10:58:30.735854 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:58:44 crc kubenswrapper[4728]: I0227 10:58:44.725966 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:58:44 crc kubenswrapper[4728]: E0227 10:58:44.727177 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:58:55 crc kubenswrapper[4728]: I0227 10:58:55.724603 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:58:55 crc 
kubenswrapper[4728]: E0227 10:58:55.725273 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:59:06 crc kubenswrapper[4728]: I0227 10:59:06.725655 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:59:06 crc kubenswrapper[4728]: E0227 10:59:06.726743 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:59:07 crc kubenswrapper[4728]: I0227 10:59:07.061252 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4msxf"] Feb 27 10:59:07 crc kubenswrapper[4728]: I0227 10:59:07.081899 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-4msxf"] Feb 27 10:59:08 crc kubenswrapper[4728]: I0227 10:59:08.741973 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="085a2ebe-2930-448c-aec9-396b505fb399" path="/var/lib/kubelet/pods/085a2ebe-2930-448c-aec9-396b505fb399/volumes" Feb 27 10:59:15 crc kubenswrapper[4728]: I0227 10:59:15.038337 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-tmx8h"] Feb 27 10:59:15 crc kubenswrapper[4728]: I0227 10:59:15.049332 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-db-sync-tmx8h"] Feb 27 10:59:16 crc kubenswrapper[4728]: I0227 10:59:16.041216 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tstt8"] Feb 27 10:59:16 crc kubenswrapper[4728]: I0227 10:59:16.051133 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-v2vbf"] Feb 27 10:59:16 crc kubenswrapper[4728]: I0227 10:59:16.061228 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-v2vbf"] Feb 27 10:59:16 crc kubenswrapper[4728]: I0227 10:59:16.070443 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tstt8"] Feb 27 10:59:16 crc kubenswrapper[4728]: I0227 10:59:16.749435 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="141bf253-61a8-46a0-9d10-9aefbbd124c6" path="/var/lib/kubelet/pods/141bf253-61a8-46a0-9d10-9aefbbd124c6/volumes" Feb 27 10:59:16 crc kubenswrapper[4728]: I0227 10:59:16.751332 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3ddaa20-d617-4d45-9e82-b31c982de147" path="/var/lib/kubelet/pods/d3ddaa20-d617-4d45-9e82-b31c982de147/volumes" Feb 27 10:59:16 crc kubenswrapper[4728]: I0227 10:59:16.752570 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f69c278c-545b-4a40-9f34-53d895c528c0" path="/var/lib/kubelet/pods/f69c278c-545b-4a40-9f34-53d895c528c0/volumes" Feb 27 10:59:17 crc kubenswrapper[4728]: I0227 10:59:17.725094 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:59:17 crc kubenswrapper[4728]: E0227 10:59:17.725632 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:59:21 crc kubenswrapper[4728]: I0227 10:59:21.504916 4728 scope.go:117] "RemoveContainer" containerID="8caff2e6a77c3078e4fba1cbe58ca491f7dafb5852c1814771cc5b8218eb1488" Feb 27 10:59:21 crc kubenswrapper[4728]: I0227 10:59:21.557029 4728 scope.go:117] "RemoveContainer" containerID="6b7582970aa14ffa741abca577db0d6d0bc4e25d8e05e7e1d66642688674ae26" Feb 27 10:59:21 crc kubenswrapper[4728]: I0227 10:59:21.601778 4728 scope.go:117] "RemoveContainer" containerID="c050a67b2f5f62b9a818d67124386db2c589134db2850e4f241133daee7eed5b" Feb 27 10:59:21 crc kubenswrapper[4728]: I0227 10:59:21.668003 4728 scope.go:117] "RemoveContainer" containerID="d7b6eec4e9f9cca6b7772e37722b6730767a340092347fd394452c1526e98955" Feb 27 10:59:21 crc kubenswrapper[4728]: I0227 10:59:21.713000 4728 scope.go:117] "RemoveContainer" containerID="cc4747e9802a8d462b2d4c2235792b8ceb2cceb46be16b9796c075451ca903b4" Feb 27 10:59:21 crc kubenswrapper[4728]: I0227 10:59:21.786729 4728 scope.go:117] "RemoveContainer" containerID="d1b547d94847f3dca4ab8f436e619449567c33b3c8310f9df38636fe58550781" Feb 27 10:59:21 crc kubenswrapper[4728]: I0227 10:59:21.820408 4728 scope.go:117] "RemoveContainer" containerID="f1abe65966a47425466ace996490bcf4d94d3349b1740e7639e183104240803e" Feb 27 10:59:21 crc kubenswrapper[4728]: I0227 10:59:21.870863 4728 scope.go:117] "RemoveContainer" containerID="bfdc65d6422751035eac485694f2efb82adc4ce16dfd42a905a39c32b4823bbf" Feb 27 10:59:21 crc kubenswrapper[4728]: I0227 10:59:21.899260 4728 scope.go:117] "RemoveContainer" containerID="a01d10a99a8b2be9ba0ab1bc9e3953c78b32c9585aaa9aa0150af4480b4e8091" Feb 27 10:59:21 crc kubenswrapper[4728]: I0227 10:59:21.934164 4728 scope.go:117] "RemoveContainer" containerID="2b52ca2354545d301b5b9fd9a4be5db5b27d3cca3b6e1522f1ed3aecdd9a19b0" Feb 27 10:59:21 crc kubenswrapper[4728]: I0227 10:59:21.962889 4728 scope.go:117] 
"RemoveContainer" containerID="719698de05e31a5bdedf94ade490685ae23fe85cb8b9a4a4a10c9ea62d2bc702" Feb 27 10:59:21 crc kubenswrapper[4728]: I0227 10:59:21.987892 4728 scope.go:117] "RemoveContainer" containerID="9752f11f948dc0098e7f9171797e12dd9bdabf580206df8f6012025a1ce7634e" Feb 27 10:59:22 crc kubenswrapper[4728]: I0227 10:59:22.023233 4728 scope.go:117] "RemoveContainer" containerID="b15315e8121f267a59c25c275594fd0210f3375e891f65b83d56685bbc950f17" Feb 27 10:59:22 crc kubenswrapper[4728]: I0227 10:59:22.056039 4728 scope.go:117] "RemoveContainer" containerID="d58201e350456ff6648eab9f54f9a69bc1f8422e0ff56b1e2779544fae8a2756" Feb 27 10:59:28 crc kubenswrapper[4728]: I0227 10:59:28.726365 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:59:28 crc kubenswrapper[4728]: E0227 10:59:28.727326 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:59:28 crc kubenswrapper[4728]: I0227 10:59:28.752609 4728 generic.go:334] "Generic (PLEG): container finished" podID="6a90259d-d214-48ad-ad45-b4c87c9eac15" containerID="144e34cd8bb594c66ea3de72e539cc96b7a3d21dea93fd44cf8ef27b4b706b86" exitCode=0 Feb 27 10:59:28 crc kubenswrapper[4728]: I0227 10:59:28.752654 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5" event={"ID":"6a90259d-d214-48ad-ad45-b4c87c9eac15","Type":"ContainerDied","Data":"144e34cd8bb594c66ea3de72e539cc96b7a3d21dea93fd44cf8ef27b4b706b86"} Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.325303 4728 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.417020 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a90259d-d214-48ad-ad45-b4c87c9eac15-ssh-key-openstack-edpm-ipam\") pod \"6a90259d-d214-48ad-ad45-b4c87c9eac15\" (UID: \"6a90259d-d214-48ad-ad45-b4c87c9eac15\") " Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.417118 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a90259d-d214-48ad-ad45-b4c87c9eac15-inventory\") pod \"6a90259d-d214-48ad-ad45-b4c87c9eac15\" (UID: \"6a90259d-d214-48ad-ad45-b4c87c9eac15\") " Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.417157 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lhkk\" (UniqueName: \"kubernetes.io/projected/6a90259d-d214-48ad-ad45-b4c87c9eac15-kube-api-access-2lhkk\") pod \"6a90259d-d214-48ad-ad45-b4c87c9eac15\" (UID: \"6a90259d-d214-48ad-ad45-b4c87c9eac15\") " Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.426864 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a90259d-d214-48ad-ad45-b4c87c9eac15-kube-api-access-2lhkk" (OuterVolumeSpecName: "kube-api-access-2lhkk") pod "6a90259d-d214-48ad-ad45-b4c87c9eac15" (UID: "6a90259d-d214-48ad-ad45-b4c87c9eac15"). InnerVolumeSpecName "kube-api-access-2lhkk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.480007 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a90259d-d214-48ad-ad45-b4c87c9eac15-inventory" (OuterVolumeSpecName: "inventory") pod "6a90259d-d214-48ad-ad45-b4c87c9eac15" (UID: "6a90259d-d214-48ad-ad45-b4c87c9eac15"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.481950 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a90259d-d214-48ad-ad45-b4c87c9eac15-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6a90259d-d214-48ad-ad45-b4c87c9eac15" (UID: "6a90259d-d214-48ad-ad45-b4c87c9eac15"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.520306 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a90259d-d214-48ad-ad45-b4c87c9eac15-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.520550 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a90259d-d214-48ad-ad45-b4c87c9eac15-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.520634 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lhkk\" (UniqueName: \"kubernetes.io/projected/6a90259d-d214-48ad-ad45-b4c87c9eac15-kube-api-access-2lhkk\") on node \"crc\" DevicePath \"\"" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.775738 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5" 
event={"ID":"6a90259d-d214-48ad-ad45-b4c87c9eac15","Type":"ContainerDied","Data":"2f37e138fafbb37d2596c40118a00f456183fc4aa8962e1d1246e4757680e340"} Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.776156 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f37e138fafbb37d2596c40118a00f456183fc4aa8962e1d1246e4757680e340" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.775864 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.878294 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n"] Feb 27 10:59:30 crc kubenswrapper[4728]: E0227 10:59:30.878960 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c4815a0-42fc-4f9d-9452-ea11ed73135b" containerName="oc" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.878987 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c4815a0-42fc-4f9d-9452-ea11ed73135b" containerName="oc" Feb 27 10:59:30 crc kubenswrapper[4728]: E0227 10:59:30.879019 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a90259d-d214-48ad-ad45-b4c87c9eac15" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.879042 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a90259d-d214-48ad-ad45-b4c87c9eac15" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.879347 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c4815a0-42fc-4f9d-9452-ea11ed73135b" containerName="oc" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.879392 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a90259d-d214-48ad-ad45-b4c87c9eac15" 
containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.880445 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.886307 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.886633 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.887229 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.893396 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n"] Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.946495 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.951311 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5cf31d36-5693-4ec4-bd25-87524df66974-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n\" (UID: \"5cf31d36-5693-4ec4-bd25-87524df66974\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.951565 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w98zc\" (UniqueName: \"kubernetes.io/projected/5cf31d36-5693-4ec4-bd25-87524df66974-kube-api-access-w98zc\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n\" (UID: \"5cf31d36-5693-4ec4-bd25-87524df66974\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n" Feb 27 10:59:30 crc kubenswrapper[4728]: I0227 10:59:30.951739 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cf31d36-5693-4ec4-bd25-87524df66974-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n\" (UID: \"5cf31d36-5693-4ec4-bd25-87524df66974\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n" Feb 27 10:59:31 crc kubenswrapper[4728]: I0227 10:59:31.054329 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5cf31d36-5693-4ec4-bd25-87524df66974-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n\" (UID: \"5cf31d36-5693-4ec4-bd25-87524df66974\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n" Feb 27 10:59:31 crc kubenswrapper[4728]: I0227 10:59:31.054411 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w98zc\" (UniqueName: \"kubernetes.io/projected/5cf31d36-5693-4ec4-bd25-87524df66974-kube-api-access-w98zc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n\" (UID: \"5cf31d36-5693-4ec4-bd25-87524df66974\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n" Feb 27 10:59:31 crc kubenswrapper[4728]: I0227 10:59:31.054472 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cf31d36-5693-4ec4-bd25-87524df66974-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n\" (UID: \"5cf31d36-5693-4ec4-bd25-87524df66974\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n" Feb 27 10:59:31 crc kubenswrapper[4728]: I0227 10:59:31.059096 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cf31d36-5693-4ec4-bd25-87524df66974-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n\" (UID: \"5cf31d36-5693-4ec4-bd25-87524df66974\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n" Feb 27 10:59:31 crc kubenswrapper[4728]: I0227 10:59:31.059942 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5cf31d36-5693-4ec4-bd25-87524df66974-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n\" (UID: \"5cf31d36-5693-4ec4-bd25-87524df66974\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n" Feb 27 10:59:31 crc kubenswrapper[4728]: I0227 10:59:31.073634 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w98zc\" (UniqueName: \"kubernetes.io/projected/5cf31d36-5693-4ec4-bd25-87524df66974-kube-api-access-w98zc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n\" (UID: \"5cf31d36-5693-4ec4-bd25-87524df66974\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n" Feb 27 10:59:31 crc kubenswrapper[4728]: I0227 10:59:31.262604 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n" Feb 27 10:59:31 crc kubenswrapper[4728]: I0227 10:59:31.848342 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n"] Feb 27 10:59:32 crc kubenswrapper[4728]: I0227 10:59:32.801809 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n" event={"ID":"5cf31d36-5693-4ec4-bd25-87524df66974","Type":"ContainerStarted","Data":"cbab449db1b132cc1d8846939a5371d8241d0850fe8bb299aed8abe071c66235"} Feb 27 10:59:32 crc kubenswrapper[4728]: I0227 10:59:32.802305 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n" event={"ID":"5cf31d36-5693-4ec4-bd25-87524df66974","Type":"ContainerStarted","Data":"6b3b30aff0844663f4a034b7a013bd701d379cdb0d28e6002c27325cf11c0922"} Feb 27 10:59:32 crc kubenswrapper[4728]: I0227 10:59:32.818835 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n" podStartSLOduration=2.381549817 podStartE2EDuration="2.818807083s" podCreationTimestamp="2026-02-27 10:59:30 +0000 UTC" firstStartedPulling="2026-02-27 10:59:31.860531039 +0000 UTC m=+1991.822897145" lastFinishedPulling="2026-02-27 10:59:32.297788285 +0000 UTC m=+1992.260154411" observedRunningTime="2026-02-27 10:59:32.815379449 +0000 UTC m=+1992.777745555" watchObservedRunningTime="2026-02-27 10:59:32.818807083 +0000 UTC m=+1992.781173189" Feb 27 10:59:34 crc kubenswrapper[4728]: I0227 10:59:34.056582 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-dds8x"] Feb 27 10:59:34 crc kubenswrapper[4728]: I0227 10:59:34.069474 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-dds8x"] Feb 27 10:59:34 crc kubenswrapper[4728]: 
I0227 10:59:34.750311 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e293ec90-6006-49f0-8e24-8a0f4327d2cf" path="/var/lib/kubelet/pods/e293ec90-6006-49f0-8e24-8a0f4327d2cf/volumes" Feb 27 10:59:41 crc kubenswrapper[4728]: I0227 10:59:41.726241 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:59:41 crc kubenswrapper[4728]: E0227 10:59:41.727395 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 10:59:55 crc kubenswrapper[4728]: I0227 10:59:55.724396 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 10:59:55 crc kubenswrapper[4728]: E0227 10:59:55.725236 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.191631 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536500-pqxmp"] Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.194835 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536500-pqxmp" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.197126 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.197684 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.198111 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.205370 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22"] Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.207273 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.209753 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.209761 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.222799 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536500-pqxmp"] Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.263925 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22"] Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.345214 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/896c73eb-cb8d-4aa1-88e7-9748213bb799-config-volume\") pod \"collect-profiles-29536500-fxt22\" (UID: \"896c73eb-cb8d-4aa1-88e7-9748213bb799\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.345281 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdzxl\" (UniqueName: \"kubernetes.io/projected/878771e5-dc09-46d1-a3b0-79628625cd3e-kube-api-access-cdzxl\") pod \"auto-csr-approver-29536500-pqxmp\" (UID: \"878771e5-dc09-46d1-a3b0-79628625cd3e\") " pod="openshift-infra/auto-csr-approver-29536500-pqxmp" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.345820 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/896c73eb-cb8d-4aa1-88e7-9748213bb799-secret-volume\") pod \"collect-profiles-29536500-fxt22\" (UID: \"896c73eb-cb8d-4aa1-88e7-9748213bb799\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.345869 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g577s\" (UniqueName: \"kubernetes.io/projected/896c73eb-cb8d-4aa1-88e7-9748213bb799-kube-api-access-g577s\") pod \"collect-profiles-29536500-fxt22\" (UID: \"896c73eb-cb8d-4aa1-88e7-9748213bb799\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.448407 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/896c73eb-cb8d-4aa1-88e7-9748213bb799-config-volume\") pod \"collect-profiles-29536500-fxt22\" (UID: \"896c73eb-cb8d-4aa1-88e7-9748213bb799\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22" Feb 27 
11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.448476 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdzxl\" (UniqueName: \"kubernetes.io/projected/878771e5-dc09-46d1-a3b0-79628625cd3e-kube-api-access-cdzxl\") pod \"auto-csr-approver-29536500-pqxmp\" (UID: \"878771e5-dc09-46d1-a3b0-79628625cd3e\") " pod="openshift-infra/auto-csr-approver-29536500-pqxmp" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.448683 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/896c73eb-cb8d-4aa1-88e7-9748213bb799-secret-volume\") pod \"collect-profiles-29536500-fxt22\" (UID: \"896c73eb-cb8d-4aa1-88e7-9748213bb799\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.448711 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g577s\" (UniqueName: \"kubernetes.io/projected/896c73eb-cb8d-4aa1-88e7-9748213bb799-kube-api-access-g577s\") pod \"collect-profiles-29536500-fxt22\" (UID: \"896c73eb-cb8d-4aa1-88e7-9748213bb799\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.449711 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/896c73eb-cb8d-4aa1-88e7-9748213bb799-config-volume\") pod \"collect-profiles-29536500-fxt22\" (UID: \"896c73eb-cb8d-4aa1-88e7-9748213bb799\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.464087 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/896c73eb-cb8d-4aa1-88e7-9748213bb799-secret-volume\") pod \"collect-profiles-29536500-fxt22\" (UID: 
\"896c73eb-cb8d-4aa1-88e7-9748213bb799\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.465387 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g577s\" (UniqueName: \"kubernetes.io/projected/896c73eb-cb8d-4aa1-88e7-9748213bb799-kube-api-access-g577s\") pod \"collect-profiles-29536500-fxt22\" (UID: \"896c73eb-cb8d-4aa1-88e7-9748213bb799\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.466380 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdzxl\" (UniqueName: \"kubernetes.io/projected/878771e5-dc09-46d1-a3b0-79628625cd3e-kube-api-access-cdzxl\") pod \"auto-csr-approver-29536500-pqxmp\" (UID: \"878771e5-dc09-46d1-a3b0-79628625cd3e\") " pod="openshift-infra/auto-csr-approver-29536500-pqxmp" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.527993 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536500-pqxmp" Feb 27 11:00:00 crc kubenswrapper[4728]: I0227 11:00:00.547526 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22" Feb 27 11:00:01 crc kubenswrapper[4728]: I0227 11:00:01.037494 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536500-pqxmp"] Feb 27 11:00:01 crc kubenswrapper[4728]: W0227 11:00:01.042178 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod878771e5_dc09_46d1_a3b0_79628625cd3e.slice/crio-fc377e02c225b886f15b51a6158ab370bc10e99219c170d1a62e8fa335d60fb1 WatchSource:0}: Error finding container fc377e02c225b886f15b51a6158ab370bc10e99219c170d1a62e8fa335d60fb1: Status 404 returned error can't find the container with id fc377e02c225b886f15b51a6158ab370bc10e99219c170d1a62e8fa335d60fb1 Feb 27 11:00:01 crc kubenswrapper[4728]: I0227 11:00:01.128253 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22"] Feb 27 11:00:01 crc kubenswrapper[4728]: I0227 11:00:01.177368 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22" event={"ID":"896c73eb-cb8d-4aa1-88e7-9748213bb799","Type":"ContainerStarted","Data":"031cf9a1c395a242d35a9933f8e32725c052888c24b0e55e812f88a847194479"} Feb 27 11:00:01 crc kubenswrapper[4728]: I0227 11:00:01.178574 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536500-pqxmp" event={"ID":"878771e5-dc09-46d1-a3b0-79628625cd3e","Type":"ContainerStarted","Data":"fc377e02c225b886f15b51a6158ab370bc10e99219c170d1a62e8fa335d60fb1"} Feb 27 11:00:02 crc kubenswrapper[4728]: I0227 11:00:02.192128 4728 generic.go:334] "Generic (PLEG): container finished" podID="896c73eb-cb8d-4aa1-88e7-9748213bb799" containerID="48f6ffb83f6a36d6d6a86dc65837a6815f32590ad606cfea7ee28491fb4626d7" exitCode=0 Feb 27 11:00:02 crc kubenswrapper[4728]: I0227 11:00:02.192413 
4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22" event={"ID":"896c73eb-cb8d-4aa1-88e7-9748213bb799","Type":"ContainerDied","Data":"48f6ffb83f6a36d6d6a86dc65837a6815f32590ad606cfea7ee28491fb4626d7"} Feb 27 11:00:03 crc kubenswrapper[4728]: I0227 11:00:03.697036 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22" Feb 27 11:00:03 crc kubenswrapper[4728]: I0227 11:00:03.871610 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/896c73eb-cb8d-4aa1-88e7-9748213bb799-secret-volume\") pod \"896c73eb-cb8d-4aa1-88e7-9748213bb799\" (UID: \"896c73eb-cb8d-4aa1-88e7-9748213bb799\") " Feb 27 11:00:03 crc kubenswrapper[4728]: I0227 11:00:03.871799 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/896c73eb-cb8d-4aa1-88e7-9748213bb799-config-volume\") pod \"896c73eb-cb8d-4aa1-88e7-9748213bb799\" (UID: \"896c73eb-cb8d-4aa1-88e7-9748213bb799\") " Feb 27 11:00:03 crc kubenswrapper[4728]: I0227 11:00:03.871926 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g577s\" (UniqueName: \"kubernetes.io/projected/896c73eb-cb8d-4aa1-88e7-9748213bb799-kube-api-access-g577s\") pod \"896c73eb-cb8d-4aa1-88e7-9748213bb799\" (UID: \"896c73eb-cb8d-4aa1-88e7-9748213bb799\") " Feb 27 11:00:03 crc kubenswrapper[4728]: I0227 11:00:03.872669 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/896c73eb-cb8d-4aa1-88e7-9748213bb799-config-volume" (OuterVolumeSpecName: "config-volume") pod "896c73eb-cb8d-4aa1-88e7-9748213bb799" (UID: "896c73eb-cb8d-4aa1-88e7-9748213bb799"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:00:03 crc kubenswrapper[4728]: I0227 11:00:03.873059 4728 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/896c73eb-cb8d-4aa1-88e7-9748213bb799-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:03 crc kubenswrapper[4728]: I0227 11:00:03.879438 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/896c73eb-cb8d-4aa1-88e7-9748213bb799-kube-api-access-g577s" (OuterVolumeSpecName: "kube-api-access-g577s") pod "896c73eb-cb8d-4aa1-88e7-9748213bb799" (UID: "896c73eb-cb8d-4aa1-88e7-9748213bb799"). InnerVolumeSpecName "kube-api-access-g577s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:00:03 crc kubenswrapper[4728]: I0227 11:00:03.880382 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/896c73eb-cb8d-4aa1-88e7-9748213bb799-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "896c73eb-cb8d-4aa1-88e7-9748213bb799" (UID: "896c73eb-cb8d-4aa1-88e7-9748213bb799"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:00:03 crc kubenswrapper[4728]: I0227 11:00:03.975203 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g577s\" (UniqueName: \"kubernetes.io/projected/896c73eb-cb8d-4aa1-88e7-9748213bb799-kube-api-access-g577s\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:03 crc kubenswrapper[4728]: I0227 11:00:03.975246 4728 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/896c73eb-cb8d-4aa1-88e7-9748213bb799-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:04 crc kubenswrapper[4728]: I0227 11:00:04.267535 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22" event={"ID":"896c73eb-cb8d-4aa1-88e7-9748213bb799","Type":"ContainerDied","Data":"031cf9a1c395a242d35a9933f8e32725c052888c24b0e55e812f88a847194479"} Feb 27 11:00:04 crc kubenswrapper[4728]: I0227 11:00:04.267814 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="031cf9a1c395a242d35a9933f8e32725c052888c24b0e55e812f88a847194479" Feb 27 11:00:04 crc kubenswrapper[4728]: I0227 11:00:04.267601 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22" Feb 27 11:00:04 crc kubenswrapper[4728]: I0227 11:00:04.789480 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt"] Feb 27 11:00:04 crc kubenswrapper[4728]: I0227 11:00:04.799127 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536455-mx6zt"] Feb 27 11:00:05 crc kubenswrapper[4728]: I0227 11:00:05.290566 4728 generic.go:334] "Generic (PLEG): container finished" podID="878771e5-dc09-46d1-a3b0-79628625cd3e" containerID="bb6a64d91e929eccc4a6e4f936efb33af12bc05c225938f538353c554399fb7e" exitCode=0 Feb 27 11:00:05 crc kubenswrapper[4728]: I0227 11:00:05.290659 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536500-pqxmp" event={"ID":"878771e5-dc09-46d1-a3b0-79628625cd3e","Type":"ContainerDied","Data":"bb6a64d91e929eccc4a6e4f936efb33af12bc05c225938f538353c554399fb7e"} Feb 27 11:00:06 crc kubenswrapper[4728]: I0227 11:00:06.739764 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f82e7468-152d-46a6-9012-3bb0b4219b3f" path="/var/lib/kubelet/pods/f82e7468-152d-46a6-9012-3bb0b4219b3f/volumes" Feb 27 11:00:06 crc kubenswrapper[4728]: I0227 11:00:06.766299 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536500-pqxmp" Feb 27 11:00:06 crc kubenswrapper[4728]: I0227 11:00:06.853094 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdzxl\" (UniqueName: \"kubernetes.io/projected/878771e5-dc09-46d1-a3b0-79628625cd3e-kube-api-access-cdzxl\") pod \"878771e5-dc09-46d1-a3b0-79628625cd3e\" (UID: \"878771e5-dc09-46d1-a3b0-79628625cd3e\") " Feb 27 11:00:06 crc kubenswrapper[4728]: I0227 11:00:06.861465 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/878771e5-dc09-46d1-a3b0-79628625cd3e-kube-api-access-cdzxl" (OuterVolumeSpecName: "kube-api-access-cdzxl") pod "878771e5-dc09-46d1-a3b0-79628625cd3e" (UID: "878771e5-dc09-46d1-a3b0-79628625cd3e"). InnerVolumeSpecName "kube-api-access-cdzxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:00:06 crc kubenswrapper[4728]: I0227 11:00:06.954779 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdzxl\" (UniqueName: \"kubernetes.io/projected/878771e5-dc09-46d1-a3b0-79628625cd3e-kube-api-access-cdzxl\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:07 crc kubenswrapper[4728]: I0227 11:00:07.316280 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536500-pqxmp" event={"ID":"878771e5-dc09-46d1-a3b0-79628625cd3e","Type":"ContainerDied","Data":"fc377e02c225b886f15b51a6158ab370bc10e99219c170d1a62e8fa335d60fb1"} Feb 27 11:00:07 crc kubenswrapper[4728]: I0227 11:00:07.316318 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc377e02c225b886f15b51a6158ab370bc10e99219c170d1a62e8fa335d60fb1" Feb 27 11:00:07 crc kubenswrapper[4728]: I0227 11:00:07.316408 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536500-pqxmp" Feb 27 11:00:07 crc kubenswrapper[4728]: I0227 11:00:07.835833 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536494-97ggp"] Feb 27 11:00:07 crc kubenswrapper[4728]: I0227 11:00:07.847657 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536494-97ggp"] Feb 27 11:00:08 crc kubenswrapper[4728]: I0227 11:00:08.725279 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 11:00:08 crc kubenswrapper[4728]: E0227 11:00:08.725641 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:00:08 crc kubenswrapper[4728]: I0227 11:00:08.739495 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="829ffeda-5f1b-4cfa-8417-71c47d4e621f" path="/var/lib/kubelet/pods/829ffeda-5f1b-4cfa-8417-71c47d4e621f/volumes" Feb 27 11:00:10 crc kubenswrapper[4728]: I0227 11:00:10.036575 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7371-account-create-update-6lhlf"] Feb 27 11:00:10 crc kubenswrapper[4728]: I0227 11:00:10.049819 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7371-account-create-update-6lhlf"] Feb 27 11:00:10 crc kubenswrapper[4728]: I0227 11:00:10.751690 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="051376ee-f481-42d6-a69c-637247569a1d" path="/var/lib/kubelet/pods/051376ee-f481-42d6-a69c-637247569a1d/volumes" Feb 27 11:00:12 crc kubenswrapper[4728]: 
I0227 11:00:12.072446 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-m9lhq"] Feb 27 11:00:12 crc kubenswrapper[4728]: I0227 11:00:12.096612 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3833-account-create-update-sllck"] Feb 27 11:00:12 crc kubenswrapper[4728]: I0227 11:00:12.113110 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dmpcg"] Feb 27 11:00:12 crc kubenswrapper[4728]: I0227 11:00:12.124039 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-cnj5s"] Feb 27 11:00:12 crc kubenswrapper[4728]: I0227 11:00:12.137010 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b66a-account-create-update-wtmz5"] Feb 27 11:00:12 crc kubenswrapper[4728]: I0227 11:00:12.152311 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3833-account-create-update-sllck"] Feb 27 11:00:12 crc kubenswrapper[4728]: I0227 11:00:12.163027 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-m9lhq"] Feb 27 11:00:12 crc kubenswrapper[4728]: I0227 11:00:12.174307 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dmpcg"] Feb 27 11:00:12 crc kubenswrapper[4728]: I0227 11:00:12.186041 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-b66a-account-create-update-wtmz5"] Feb 27 11:00:12 crc kubenswrapper[4728]: I0227 11:00:12.194631 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-cnj5s"] Feb 27 11:00:12 crc kubenswrapper[4728]: I0227 11:00:12.739046 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="390e39cb-69e2-4f3b-94a7-deef2d13c027" path="/var/lib/kubelet/pods/390e39cb-69e2-4f3b-94a7-deef2d13c027/volumes" Feb 27 11:00:12 crc kubenswrapper[4728]: I0227 11:00:12.739835 4728 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="9d5f0465-1665-4c7f-b8f5-e345efbe3872" path="/var/lib/kubelet/pods/9d5f0465-1665-4c7f-b8f5-e345efbe3872/volumes" Feb 27 11:00:12 crc kubenswrapper[4728]: I0227 11:00:12.740463 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af86f825-d16c-4856-a823-01f06824a97f" path="/var/lib/kubelet/pods/af86f825-d16c-4856-a823-01f06824a97f/volumes" Feb 27 11:00:12 crc kubenswrapper[4728]: I0227 11:00:12.741821 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3f1a225-d366-4d33-bd31-53ef143b546b" path="/var/lib/kubelet/pods/b3f1a225-d366-4d33-bd31-53ef143b546b/volumes" Feb 27 11:00:12 crc kubenswrapper[4728]: I0227 11:00:12.742849 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c297845b-bf34-4c1d-88f0-3f9b28d09e54" path="/var/lib/kubelet/pods/c297845b-bf34-4c1d-88f0-3f9b28d09e54/volumes" Feb 27 11:00:19 crc kubenswrapper[4728]: I0227 11:00:19.724942 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 11:00:19 crc kubenswrapper[4728]: E0227 11:00:19.726221 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:00:22 crc kubenswrapper[4728]: I0227 11:00:22.347876 4728 scope.go:117] "RemoveContainer" containerID="edadc918fd6b5518e1194a93b819e609d220ae4326c3699c90bca9a9be1cfe95" Feb 27 11:00:22 crc kubenswrapper[4728]: I0227 11:00:22.379677 4728 scope.go:117] "RemoveContainer" containerID="af14006cb97dcfb309cae43234688f5a90e9d54c25c2ea51f5ac948c38136fe3" Feb 27 11:00:22 crc kubenswrapper[4728]: I0227 11:00:22.468148 4728 
scope.go:117] "RemoveContainer" containerID="9047859afd057f8c632d00166deebd1664cf8f80449e4893837ed2245fd4deae" Feb 27 11:00:22 crc kubenswrapper[4728]: I0227 11:00:22.649665 4728 scope.go:117] "RemoveContainer" containerID="aa1cb35b13eb3354dc69d33a288dd974cee0bf56a76e56257f834a3057988edb" Feb 27 11:00:22 crc kubenswrapper[4728]: I0227 11:00:22.706947 4728 scope.go:117] "RemoveContainer" containerID="7ae680eac9014a0a7a7b44a257a042e0ad152a6c92810dbe55308637990af761" Feb 27 11:00:22 crc kubenswrapper[4728]: I0227 11:00:22.747838 4728 scope.go:117] "RemoveContainer" containerID="f4015a30582b187ccb0c55f1870d06cb1d9d793f10f1ed0ed31c3e2494d51211" Feb 27 11:00:22 crc kubenswrapper[4728]: I0227 11:00:22.799493 4728 scope.go:117] "RemoveContainer" containerID="0b5c76c2458773631054dcaca04d179732379c023235d1a76b79c113f201901a" Feb 27 11:00:22 crc kubenswrapper[4728]: I0227 11:00:22.836707 4728 scope.go:117] "RemoveContainer" containerID="ca665a991a6757017f54e60649d11642ff9c681aa78712ef7af425af22d4ab5d" Feb 27 11:00:22 crc kubenswrapper[4728]: I0227 11:00:22.860434 4728 scope.go:117] "RemoveContainer" containerID="b543d083385962cb317532c57b2ca5f63d7e509372a519209505ae381fce765b" Feb 27 11:00:31 crc kubenswrapper[4728]: I0227 11:00:31.725806 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 11:00:31 crc kubenswrapper[4728]: E0227 11:00:31.726924 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:00:38 crc kubenswrapper[4728]: I0227 11:00:38.739682 4728 generic.go:334] "Generic (PLEG): container finished" 
podID="5cf31d36-5693-4ec4-bd25-87524df66974" containerID="cbab449db1b132cc1d8846939a5371d8241d0850fe8bb299aed8abe071c66235" exitCode=0 Feb 27 11:00:38 crc kubenswrapper[4728]: I0227 11:00:38.747810 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n" event={"ID":"5cf31d36-5693-4ec4-bd25-87524df66974","Type":"ContainerDied","Data":"cbab449db1b132cc1d8846939a5371d8241d0850fe8bb299aed8abe071c66235"} Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.376216 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.544120 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cf31d36-5693-4ec4-bd25-87524df66974-inventory\") pod \"5cf31d36-5693-4ec4-bd25-87524df66974\" (UID: \"5cf31d36-5693-4ec4-bd25-87524df66974\") " Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.544216 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5cf31d36-5693-4ec4-bd25-87524df66974-ssh-key-openstack-edpm-ipam\") pod \"5cf31d36-5693-4ec4-bd25-87524df66974\" (UID: \"5cf31d36-5693-4ec4-bd25-87524df66974\") " Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.544661 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w98zc\" (UniqueName: \"kubernetes.io/projected/5cf31d36-5693-4ec4-bd25-87524df66974-kube-api-access-w98zc\") pod \"5cf31d36-5693-4ec4-bd25-87524df66974\" (UID: \"5cf31d36-5693-4ec4-bd25-87524df66974\") " Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.556152 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5cf31d36-5693-4ec4-bd25-87524df66974-kube-api-access-w98zc" (OuterVolumeSpecName: "kube-api-access-w98zc") pod "5cf31d36-5693-4ec4-bd25-87524df66974" (UID: "5cf31d36-5693-4ec4-bd25-87524df66974"). InnerVolumeSpecName "kube-api-access-w98zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.588612 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cf31d36-5693-4ec4-bd25-87524df66974-inventory" (OuterVolumeSpecName: "inventory") pod "5cf31d36-5693-4ec4-bd25-87524df66974" (UID: "5cf31d36-5693-4ec4-bd25-87524df66974"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.612585 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cf31d36-5693-4ec4-bd25-87524df66974-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5cf31d36-5693-4ec4-bd25-87524df66974" (UID: "5cf31d36-5693-4ec4-bd25-87524df66974"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.648228 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w98zc\" (UniqueName: \"kubernetes.io/projected/5cf31d36-5693-4ec4-bd25-87524df66974-kube-api-access-w98zc\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.648268 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cf31d36-5693-4ec4-bd25-87524df66974-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.648282 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5cf31d36-5693-4ec4-bd25-87524df66974-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.788233 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n" event={"ID":"5cf31d36-5693-4ec4-bd25-87524df66974","Type":"ContainerDied","Data":"6b3b30aff0844663f4a034b7a013bd701d379cdb0d28e6002c27325cf11c0922"} Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.788610 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b3b30aff0844663f4a034b7a013bd701d379cdb0d28e6002c27325cf11c0922" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.788312 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.880028 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4"] Feb 27 11:00:40 crc kubenswrapper[4728]: E0227 11:00:40.880958 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cf31d36-5693-4ec4-bd25-87524df66974" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.880974 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf31d36-5693-4ec4-bd25-87524df66974" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 27 11:00:40 crc kubenswrapper[4728]: E0227 11:00:40.881033 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878771e5-dc09-46d1-a3b0-79628625cd3e" containerName="oc" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.881043 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="878771e5-dc09-46d1-a3b0-79628625cd3e" containerName="oc" Feb 27 11:00:40 crc kubenswrapper[4728]: E0227 11:00:40.881062 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="896c73eb-cb8d-4aa1-88e7-9748213bb799" containerName="collect-profiles" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.881070 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="896c73eb-cb8d-4aa1-88e7-9748213bb799" containerName="collect-profiles" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.881313 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cf31d36-5693-4ec4-bd25-87524df66974" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.881348 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="896c73eb-cb8d-4aa1-88e7-9748213bb799" containerName="collect-profiles" Feb 27 11:00:40 crc 
kubenswrapper[4728]: I0227 11:00:40.881363 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="878771e5-dc09-46d1-a3b0-79628625cd3e" containerName="oc" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.882681 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.892945 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.893567 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.893781 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.895092 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 11:00:40 crc kubenswrapper[4728]: I0227 11:00:40.912437 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4"] Feb 27 11:00:41 crc kubenswrapper[4728]: I0227 11:00:41.079552 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfxg9\" (UniqueName: \"kubernetes.io/projected/699102c6-20ff-4e22-8981-e8c12a0c5a01-kube-api-access-hfxg9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4\" (UID: \"699102c6-20ff-4e22-8981-e8c12a0c5a01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4" Feb 27 11:00:41 crc kubenswrapper[4728]: I0227 11:00:41.079855 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/699102c6-20ff-4e22-8981-e8c12a0c5a01-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4\" (UID: \"699102c6-20ff-4e22-8981-e8c12a0c5a01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4" Feb 27 11:00:41 crc kubenswrapper[4728]: I0227 11:00:41.080437 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/699102c6-20ff-4e22-8981-e8c12a0c5a01-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4\" (UID: \"699102c6-20ff-4e22-8981-e8c12a0c5a01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4" Feb 27 11:00:41 crc kubenswrapper[4728]: I0227 11:00:41.182669 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/699102c6-20ff-4e22-8981-e8c12a0c5a01-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4\" (UID: \"699102c6-20ff-4e22-8981-e8c12a0c5a01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4" Feb 27 11:00:41 crc kubenswrapper[4728]: I0227 11:00:41.183773 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/699102c6-20ff-4e22-8981-e8c12a0c5a01-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4\" (UID: \"699102c6-20ff-4e22-8981-e8c12a0c5a01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4" Feb 27 11:00:41 crc kubenswrapper[4728]: I0227 11:00:41.184111 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfxg9\" (UniqueName: \"kubernetes.io/projected/699102c6-20ff-4e22-8981-e8c12a0c5a01-kube-api-access-hfxg9\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4\" (UID: \"699102c6-20ff-4e22-8981-e8c12a0c5a01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4" Feb 27 11:00:41 crc kubenswrapper[4728]: I0227 11:00:41.187851 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/699102c6-20ff-4e22-8981-e8c12a0c5a01-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4\" (UID: \"699102c6-20ff-4e22-8981-e8c12a0c5a01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4" Feb 27 11:00:41 crc kubenswrapper[4728]: I0227 11:00:41.192172 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/699102c6-20ff-4e22-8981-e8c12a0c5a01-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4\" (UID: \"699102c6-20ff-4e22-8981-e8c12a0c5a01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4" Feb 27 11:00:41 crc kubenswrapper[4728]: I0227 11:00:41.205832 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfxg9\" (UniqueName: \"kubernetes.io/projected/699102c6-20ff-4e22-8981-e8c12a0c5a01-kube-api-access-hfxg9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4\" (UID: \"699102c6-20ff-4e22-8981-e8c12a0c5a01\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4" Feb 27 11:00:41 crc kubenswrapper[4728]: I0227 11:00:41.208787 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4" Feb 27 11:00:41 crc kubenswrapper[4728]: I0227 11:00:41.873942 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4"] Feb 27 11:00:42 crc kubenswrapper[4728]: I0227 11:00:42.812185 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4" event={"ID":"699102c6-20ff-4e22-8981-e8c12a0c5a01","Type":"ContainerStarted","Data":"131f79de235427d5b3ee52a4428fda2491e67d39d52cb2c7b959cc5e1e1933ad"} Feb 27 11:00:42 crc kubenswrapper[4728]: I0227 11:00:42.812756 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4" event={"ID":"699102c6-20ff-4e22-8981-e8c12a0c5a01","Type":"ContainerStarted","Data":"50f3e9c878599d5a5a20bc541b4d61bf0c4dbc1c6a6b47622a9e915fd81172b6"} Feb 27 11:00:42 crc kubenswrapper[4728]: I0227 11:00:42.832405 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4" podStartSLOduration=2.393852017 podStartE2EDuration="2.832388396s" podCreationTimestamp="2026-02-27 11:00:40 +0000 UTC" firstStartedPulling="2026-02-27 11:00:41.87654381 +0000 UTC m=+2061.838909916" lastFinishedPulling="2026-02-27 11:00:42.315080179 +0000 UTC m=+2062.277446295" observedRunningTime="2026-02-27 11:00:42.827298549 +0000 UTC m=+2062.789664655" watchObservedRunningTime="2026-02-27 11:00:42.832388396 +0000 UTC m=+2062.794754492" Feb 27 11:00:44 crc kubenswrapper[4728]: I0227 11:00:44.047875 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-74b5q"] Feb 27 11:00:44 crc kubenswrapper[4728]: I0227 11:00:44.067114 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-74b5q"] Feb 27 11:00:44 crc 
kubenswrapper[4728]: I0227 11:00:44.725282 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 11:00:44 crc kubenswrapper[4728]: E0227 11:00:44.725689 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:00:44 crc kubenswrapper[4728]: I0227 11:00:44.740024 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aaa1a60-7863-44de-8271-50bcd8fc1743" path="/var/lib/kubelet/pods/4aaa1a60-7863-44de-8271-50bcd8fc1743/volumes" Feb 27 11:00:47 crc kubenswrapper[4728]: I0227 11:00:47.873605 4728 generic.go:334] "Generic (PLEG): container finished" podID="699102c6-20ff-4e22-8981-e8c12a0c5a01" containerID="131f79de235427d5b3ee52a4428fda2491e67d39d52cb2c7b959cc5e1e1933ad" exitCode=0 Feb 27 11:00:47 crc kubenswrapper[4728]: I0227 11:00:47.873705 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4" event={"ID":"699102c6-20ff-4e22-8981-e8c12a0c5a01","Type":"ContainerDied","Data":"131f79de235427d5b3ee52a4428fda2491e67d39d52cb2c7b959cc5e1e1933ad"} Feb 27 11:00:49 crc kubenswrapper[4728]: I0227 11:00:49.433484 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4" Feb 27 11:00:49 crc kubenswrapper[4728]: I0227 11:00:49.614521 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/699102c6-20ff-4e22-8981-e8c12a0c5a01-inventory\") pod \"699102c6-20ff-4e22-8981-e8c12a0c5a01\" (UID: \"699102c6-20ff-4e22-8981-e8c12a0c5a01\") " Feb 27 11:00:49 crc kubenswrapper[4728]: I0227 11:00:49.614817 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/699102c6-20ff-4e22-8981-e8c12a0c5a01-ssh-key-openstack-edpm-ipam\") pod \"699102c6-20ff-4e22-8981-e8c12a0c5a01\" (UID: \"699102c6-20ff-4e22-8981-e8c12a0c5a01\") " Feb 27 11:00:49 crc kubenswrapper[4728]: I0227 11:00:49.615061 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfxg9\" (UniqueName: \"kubernetes.io/projected/699102c6-20ff-4e22-8981-e8c12a0c5a01-kube-api-access-hfxg9\") pod \"699102c6-20ff-4e22-8981-e8c12a0c5a01\" (UID: \"699102c6-20ff-4e22-8981-e8c12a0c5a01\") " Feb 27 11:00:49 crc kubenswrapper[4728]: I0227 11:00:49.620954 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/699102c6-20ff-4e22-8981-e8c12a0c5a01-kube-api-access-hfxg9" (OuterVolumeSpecName: "kube-api-access-hfxg9") pod "699102c6-20ff-4e22-8981-e8c12a0c5a01" (UID: "699102c6-20ff-4e22-8981-e8c12a0c5a01"). InnerVolumeSpecName "kube-api-access-hfxg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:00:49 crc kubenswrapper[4728]: I0227 11:00:49.647051 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/699102c6-20ff-4e22-8981-e8c12a0c5a01-inventory" (OuterVolumeSpecName: "inventory") pod "699102c6-20ff-4e22-8981-e8c12a0c5a01" (UID: "699102c6-20ff-4e22-8981-e8c12a0c5a01"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:00:49 crc kubenswrapper[4728]: I0227 11:00:49.648967 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/699102c6-20ff-4e22-8981-e8c12a0c5a01-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "699102c6-20ff-4e22-8981-e8c12a0c5a01" (UID: "699102c6-20ff-4e22-8981-e8c12a0c5a01"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:00:49 crc kubenswrapper[4728]: I0227 11:00:49.718432 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfxg9\" (UniqueName: \"kubernetes.io/projected/699102c6-20ff-4e22-8981-e8c12a0c5a01-kube-api-access-hfxg9\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:49 crc kubenswrapper[4728]: I0227 11:00:49.718476 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/699102c6-20ff-4e22-8981-e8c12a0c5a01-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:49 crc kubenswrapper[4728]: I0227 11:00:49.718488 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/699102c6-20ff-4e22-8981-e8c12a0c5a01-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 11:00:49 crc kubenswrapper[4728]: I0227 11:00:49.896536 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4" event={"ID":"699102c6-20ff-4e22-8981-e8c12a0c5a01","Type":"ContainerDied","Data":"50f3e9c878599d5a5a20bc541b4d61bf0c4dbc1c6a6b47622a9e915fd81172b6"} Feb 27 11:00:49 crc kubenswrapper[4728]: I0227 11:00:49.896588 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50f3e9c878599d5a5a20bc541b4d61bf0c4dbc1c6a6b47622a9e915fd81172b6" Feb 27 11:00:49 crc kubenswrapper[4728]: I0227 
11:00:49.896650 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4" Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.031071 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d"] Feb 27 11:00:50 crc kubenswrapper[4728]: E0227 11:00:50.031652 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="699102c6-20ff-4e22-8981-e8c12a0c5a01" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.031672 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="699102c6-20ff-4e22-8981-e8c12a0c5a01" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.031980 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="699102c6-20ff-4e22-8981-e8c12a0c5a01" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.032863 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d" Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.035612 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.035641 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.035925 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7" Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.035971 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.046040 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d"] Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.128784 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c346330-059d-4998-91ec-3014d3cfe1b9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vh27d\" (UID: \"1c346330-059d-4998-91ec-3014d3cfe1b9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d" Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.128885 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4smbq\" (UniqueName: \"kubernetes.io/projected/1c346330-059d-4998-91ec-3014d3cfe1b9-kube-api-access-4smbq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vh27d\" (UID: \"1c346330-059d-4998-91ec-3014d3cfe1b9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d" Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.129031 4728 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c346330-059d-4998-91ec-3014d3cfe1b9-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vh27d\" (UID: \"1c346330-059d-4998-91ec-3014d3cfe1b9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d" Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.231637 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c346330-059d-4998-91ec-3014d3cfe1b9-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vh27d\" (UID: \"1c346330-059d-4998-91ec-3014d3cfe1b9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d" Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.231775 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c346330-059d-4998-91ec-3014d3cfe1b9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vh27d\" (UID: \"1c346330-059d-4998-91ec-3014d3cfe1b9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d" Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.231851 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4smbq\" (UniqueName: \"kubernetes.io/projected/1c346330-059d-4998-91ec-3014d3cfe1b9-kube-api-access-4smbq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vh27d\" (UID: \"1c346330-059d-4998-91ec-3014d3cfe1b9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d" Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.235699 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c346330-059d-4998-91ec-3014d3cfe1b9-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-vh27d\" (UID: \"1c346330-059d-4998-91ec-3014d3cfe1b9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d" Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.245770 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c346330-059d-4998-91ec-3014d3cfe1b9-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vh27d\" (UID: \"1c346330-059d-4998-91ec-3014d3cfe1b9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d" Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.251609 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4smbq\" (UniqueName: \"kubernetes.io/projected/1c346330-059d-4998-91ec-3014d3cfe1b9-kube-api-access-4smbq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vh27d\" (UID: \"1c346330-059d-4998-91ec-3014d3cfe1b9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d" Feb 27 11:00:50 crc kubenswrapper[4728]: I0227 11:00:50.351839 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d" Feb 27 11:00:51 crc kubenswrapper[4728]: I0227 11:00:51.014829 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d"] Feb 27 11:00:51 crc kubenswrapper[4728]: I0227 11:00:51.982044 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d" event={"ID":"1c346330-059d-4998-91ec-3014d3cfe1b9","Type":"ContainerStarted","Data":"dc293e1c6e851a3747d59844298a2d9dfb0bc22d97771722ccb0ed7c0526e5d9"} Feb 27 11:00:51 crc kubenswrapper[4728]: I0227 11:00:51.982419 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d" event={"ID":"1c346330-059d-4998-91ec-3014d3cfe1b9","Type":"ContainerStarted","Data":"5654e6ded48af7d7d1bf089b9099a781166705f647f81771316c6236e6bf59c2"} Feb 27 11:00:52 crc kubenswrapper[4728]: I0227 11:00:52.015475 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d" podStartSLOduration=1.5844313840000002 podStartE2EDuration="2.01545634s" podCreationTimestamp="2026-02-27 11:00:50 +0000 UTC" firstStartedPulling="2026-02-27 11:00:51.019520716 +0000 UTC m=+2070.981886832" lastFinishedPulling="2026-02-27 11:00:51.450545672 +0000 UTC m=+2071.412911788" observedRunningTime="2026-02-27 11:00:52.008964784 +0000 UTC m=+2071.971330890" watchObservedRunningTime="2026-02-27 11:00:52.01545634 +0000 UTC m=+2071.977822446" Feb 27 11:00:59 crc kubenswrapper[4728]: I0227 11:00:59.725957 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 11:00:59 crc kubenswrapper[4728]: E0227 11:00:59.727160 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:01:00 crc kubenswrapper[4728]: I0227 11:01:00.140919 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29536501-ptgcq"] Feb 27 11:01:00 crc kubenswrapper[4728]: I0227 11:01:00.143822 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29536501-ptgcq" Feb 27 11:01:00 crc kubenswrapper[4728]: I0227 11:01:00.154706 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29536501-ptgcq"] Feb 27 11:01:00 crc kubenswrapper[4728]: I0227 11:01:00.289751 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3bc7091-e483-405f-8cf6-a3494fa34cdf-combined-ca-bundle\") pod \"keystone-cron-29536501-ptgcq\" (UID: \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\") " pod="openstack/keystone-cron-29536501-ptgcq" Feb 27 11:01:00 crc kubenswrapper[4728]: I0227 11:01:00.289964 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9p4c\" (UniqueName: \"kubernetes.io/projected/c3bc7091-e483-405f-8cf6-a3494fa34cdf-kube-api-access-d9p4c\") pod \"keystone-cron-29536501-ptgcq\" (UID: \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\") " pod="openstack/keystone-cron-29536501-ptgcq" Feb 27 11:01:00 crc kubenswrapper[4728]: I0227 11:01:00.290105 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3bc7091-e483-405f-8cf6-a3494fa34cdf-fernet-keys\") pod \"keystone-cron-29536501-ptgcq\" (UID: \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\") " 
pod="openstack/keystone-cron-29536501-ptgcq" Feb 27 11:01:00 crc kubenswrapper[4728]: I0227 11:01:00.290137 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3bc7091-e483-405f-8cf6-a3494fa34cdf-config-data\") pod \"keystone-cron-29536501-ptgcq\" (UID: \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\") " pod="openstack/keystone-cron-29536501-ptgcq" Feb 27 11:01:00 crc kubenswrapper[4728]: I0227 11:01:00.393800 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3bc7091-e483-405f-8cf6-a3494fa34cdf-combined-ca-bundle\") pod \"keystone-cron-29536501-ptgcq\" (UID: \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\") " pod="openstack/keystone-cron-29536501-ptgcq" Feb 27 11:01:00 crc kubenswrapper[4728]: I0227 11:01:00.393935 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9p4c\" (UniqueName: \"kubernetes.io/projected/c3bc7091-e483-405f-8cf6-a3494fa34cdf-kube-api-access-d9p4c\") pod \"keystone-cron-29536501-ptgcq\" (UID: \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\") " pod="openstack/keystone-cron-29536501-ptgcq" Feb 27 11:01:00 crc kubenswrapper[4728]: I0227 11:01:00.394020 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3bc7091-e483-405f-8cf6-a3494fa34cdf-fernet-keys\") pod \"keystone-cron-29536501-ptgcq\" (UID: \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\") " pod="openstack/keystone-cron-29536501-ptgcq" Feb 27 11:01:00 crc kubenswrapper[4728]: I0227 11:01:00.394040 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3bc7091-e483-405f-8cf6-a3494fa34cdf-config-data\") pod \"keystone-cron-29536501-ptgcq\" (UID: \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\") " 
pod="openstack/keystone-cron-29536501-ptgcq" Feb 27 11:01:00 crc kubenswrapper[4728]: I0227 11:01:00.402881 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3bc7091-e483-405f-8cf6-a3494fa34cdf-combined-ca-bundle\") pod \"keystone-cron-29536501-ptgcq\" (UID: \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\") " pod="openstack/keystone-cron-29536501-ptgcq" Feb 27 11:01:00 crc kubenswrapper[4728]: I0227 11:01:00.403392 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3bc7091-e483-405f-8cf6-a3494fa34cdf-config-data\") pod \"keystone-cron-29536501-ptgcq\" (UID: \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\") " pod="openstack/keystone-cron-29536501-ptgcq" Feb 27 11:01:00 crc kubenswrapper[4728]: I0227 11:01:00.404827 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3bc7091-e483-405f-8cf6-a3494fa34cdf-fernet-keys\") pod \"keystone-cron-29536501-ptgcq\" (UID: \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\") " pod="openstack/keystone-cron-29536501-ptgcq" Feb 27 11:01:00 crc kubenswrapper[4728]: I0227 11:01:00.429228 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9p4c\" (UniqueName: \"kubernetes.io/projected/c3bc7091-e483-405f-8cf6-a3494fa34cdf-kube-api-access-d9p4c\") pod \"keystone-cron-29536501-ptgcq\" (UID: \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\") " pod="openstack/keystone-cron-29536501-ptgcq" Feb 27 11:01:00 crc kubenswrapper[4728]: I0227 11:01:00.490272 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29536501-ptgcq" Feb 27 11:01:00 crc kubenswrapper[4728]: I0227 11:01:00.953396 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29536501-ptgcq"] Feb 27 11:01:01 crc kubenswrapper[4728]: I0227 11:01:01.075906 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536501-ptgcq" event={"ID":"c3bc7091-e483-405f-8cf6-a3494fa34cdf","Type":"ContainerStarted","Data":"0e45b1c4f3213411fd415a5afa44811ad9136ca5bdfe123b5764463e3e250dbc"} Feb 27 11:01:02 crc kubenswrapper[4728]: I0227 11:01:02.091808 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536501-ptgcq" event={"ID":"c3bc7091-e483-405f-8cf6-a3494fa34cdf","Type":"ContainerStarted","Data":"fe2b147dfb6260b8c81127dc52e2d679be04a07b910146f59beffd575806ef7d"} Feb 27 11:01:02 crc kubenswrapper[4728]: I0227 11:01:02.120403 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29536501-ptgcq" podStartSLOduration=2.120384479 podStartE2EDuration="2.120384479s" podCreationTimestamp="2026-02-27 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:01:02.113995236 +0000 UTC m=+2082.076361342" watchObservedRunningTime="2026-02-27 11:01:02.120384479 +0000 UTC m=+2082.082750585" Feb 27 11:01:05 crc kubenswrapper[4728]: I0227 11:01:05.140998 4728 generic.go:334] "Generic (PLEG): container finished" podID="c3bc7091-e483-405f-8cf6-a3494fa34cdf" containerID="fe2b147dfb6260b8c81127dc52e2d679be04a07b910146f59beffd575806ef7d" exitCode=0 Feb 27 11:01:05 crc kubenswrapper[4728]: I0227 11:01:05.141137 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536501-ptgcq" 
event={"ID":"c3bc7091-e483-405f-8cf6-a3494fa34cdf","Type":"ContainerDied","Data":"fe2b147dfb6260b8c81127dc52e2d679be04a07b910146f59beffd575806ef7d"} Feb 27 11:01:06 crc kubenswrapper[4728]: I0227 11:01:06.587243 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29536501-ptgcq" Feb 27 11:01:06 crc kubenswrapper[4728]: I0227 11:01:06.691124 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3bc7091-e483-405f-8cf6-a3494fa34cdf-fernet-keys\") pod \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\" (UID: \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\") " Feb 27 11:01:06 crc kubenswrapper[4728]: I0227 11:01:06.691254 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3bc7091-e483-405f-8cf6-a3494fa34cdf-config-data\") pod \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\" (UID: \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\") " Feb 27 11:01:06 crc kubenswrapper[4728]: I0227 11:01:06.691289 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3bc7091-e483-405f-8cf6-a3494fa34cdf-combined-ca-bundle\") pod \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\" (UID: \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\") " Feb 27 11:01:06 crc kubenswrapper[4728]: I0227 11:01:06.691576 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9p4c\" (UniqueName: \"kubernetes.io/projected/c3bc7091-e483-405f-8cf6-a3494fa34cdf-kube-api-access-d9p4c\") pod \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\" (UID: \"c3bc7091-e483-405f-8cf6-a3494fa34cdf\") " Feb 27 11:01:06 crc kubenswrapper[4728]: I0227 11:01:06.699317 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3bc7091-e483-405f-8cf6-a3494fa34cdf-kube-api-access-d9p4c" 
(OuterVolumeSpecName: "kube-api-access-d9p4c") pod "c3bc7091-e483-405f-8cf6-a3494fa34cdf" (UID: "c3bc7091-e483-405f-8cf6-a3494fa34cdf"). InnerVolumeSpecName "kube-api-access-d9p4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:01:06 crc kubenswrapper[4728]: I0227 11:01:06.700140 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3bc7091-e483-405f-8cf6-a3494fa34cdf-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c3bc7091-e483-405f-8cf6-a3494fa34cdf" (UID: "c3bc7091-e483-405f-8cf6-a3494fa34cdf"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:01:06 crc kubenswrapper[4728]: I0227 11:01:06.749776 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3bc7091-e483-405f-8cf6-a3494fa34cdf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3bc7091-e483-405f-8cf6-a3494fa34cdf" (UID: "c3bc7091-e483-405f-8cf6-a3494fa34cdf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:01:06 crc kubenswrapper[4728]: I0227 11:01:06.767205 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3bc7091-e483-405f-8cf6-a3494fa34cdf-config-data" (OuterVolumeSpecName: "config-data") pod "c3bc7091-e483-405f-8cf6-a3494fa34cdf" (UID: "c3bc7091-e483-405f-8cf6-a3494fa34cdf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:01:06 crc kubenswrapper[4728]: I0227 11:01:06.795706 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3bc7091-e483-405f-8cf6-a3494fa34cdf-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 11:01:06 crc kubenswrapper[4728]: I0227 11:01:06.795750 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3bc7091-e483-405f-8cf6-a3494fa34cdf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:01:06 crc kubenswrapper[4728]: I0227 11:01:06.795770 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9p4c\" (UniqueName: \"kubernetes.io/projected/c3bc7091-e483-405f-8cf6-a3494fa34cdf-kube-api-access-d9p4c\") on node \"crc\" DevicePath \"\"" Feb 27 11:01:06 crc kubenswrapper[4728]: I0227 11:01:06.795782 4728 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3bc7091-e483-405f-8cf6-a3494fa34cdf-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 27 11:01:07 crc kubenswrapper[4728]: I0227 11:01:07.167838 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536501-ptgcq" event={"ID":"c3bc7091-e483-405f-8cf6-a3494fa34cdf","Type":"ContainerDied","Data":"0e45b1c4f3213411fd415a5afa44811ad9136ca5bdfe123b5764463e3e250dbc"} Feb 27 11:01:07 crc kubenswrapper[4728]: I0227 11:01:07.167877 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e45b1c4f3213411fd415a5afa44811ad9136ca5bdfe123b5764463e3e250dbc" Feb 27 11:01:07 crc kubenswrapper[4728]: I0227 11:01:07.167917 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29536501-ptgcq" Feb 27 11:01:10 crc kubenswrapper[4728]: I0227 11:01:10.736890 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4" Feb 27 11:01:11 crc kubenswrapper[4728]: I0227 11:01:11.055450 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-mmv5g"] Feb 27 11:01:11 crc kubenswrapper[4728]: I0227 11:01:11.077025 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-mmv5g"] Feb 27 11:01:11 crc kubenswrapper[4728]: I0227 11:01:11.217022 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"d8d628b62f7040f098c44223cc7f9acb05d4ddab156eb45d341a5998e4060b93"} Feb 27 11:01:12 crc kubenswrapper[4728]: I0227 11:01:12.743919 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e93f65f0-9517-45e7-bcfc-3cbb70046b3e" path="/var/lib/kubelet/pods/e93f65f0-9517-45e7-bcfc-3cbb70046b3e/volumes" Feb 27 11:01:13 crc kubenswrapper[4728]: I0227 11:01:13.031762 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-85k7n"] Feb 27 11:01:13 crc kubenswrapper[4728]: I0227 11:01:13.049620 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-85k7n"] Feb 27 11:01:14 crc kubenswrapper[4728]: I0227 11:01:14.739801 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72a5cb19-78b7-47a9-8d6f-7b5b2b67f395" path="/var/lib/kubelet/pods/72a5cb19-78b7-47a9-8d6f-7b5b2b67f395/volumes" Feb 27 11:01:20 crc kubenswrapper[4728]: I0227 11:01:20.033741 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-x426g"] Feb 27 11:01:20 crc kubenswrapper[4728]: I0227 11:01:20.047115 4728 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/aodh-88ac-account-create-update-q6xk7"] Feb 27 11:01:20 crc kubenswrapper[4728]: I0227 11:01:20.061846 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-x426g"] Feb 27 11:01:20 crc kubenswrapper[4728]: I0227 11:01:20.072986 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-88ac-account-create-update-q6xk7"] Feb 27 11:01:20 crc kubenswrapper[4728]: I0227 11:01:20.742740 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="204df3b2-d221-4ba6-811d-d232b4a5d12e" path="/var/lib/kubelet/pods/204df3b2-d221-4ba6-811d-d232b4a5d12e/volumes" Feb 27 11:01:20 crc kubenswrapper[4728]: I0227 11:01:20.743967 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41862d42-5899-4daf-8f29-a24ba28d3908" path="/var/lib/kubelet/pods/41862d42-5899-4daf-8f29-a24ba28d3908/volumes" Feb 27 11:01:21 crc kubenswrapper[4728]: I0227 11:01:21.796617 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dh9jz"] Feb 27 11:01:21 crc kubenswrapper[4728]: E0227 11:01:21.803933 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3bc7091-e483-405f-8cf6-a3494fa34cdf" containerName="keystone-cron" Feb 27 11:01:21 crc kubenswrapper[4728]: I0227 11:01:21.803981 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3bc7091-e483-405f-8cf6-a3494fa34cdf" containerName="keystone-cron" Feb 27 11:01:21 crc kubenswrapper[4728]: I0227 11:01:21.804454 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3bc7091-e483-405f-8cf6-a3494fa34cdf" containerName="keystone-cron" Feb 27 11:01:21 crc kubenswrapper[4728]: I0227 11:01:21.806924 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dh9jz" Feb 27 11:01:21 crc kubenswrapper[4728]: I0227 11:01:21.816460 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dh9jz"] Feb 27 11:01:21 crc kubenswrapper[4728]: I0227 11:01:21.925842 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd-utilities\") pod \"redhat-operators-dh9jz\" (UID: \"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd\") " pod="openshift-marketplace/redhat-operators-dh9jz" Feb 27 11:01:21 crc kubenswrapper[4728]: I0227 11:01:21.925985 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd-catalog-content\") pod \"redhat-operators-dh9jz\" (UID: \"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd\") " pod="openshift-marketplace/redhat-operators-dh9jz" Feb 27 11:01:21 crc kubenswrapper[4728]: I0227 11:01:21.926054 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncf2v\" (UniqueName: \"kubernetes.io/projected/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd-kube-api-access-ncf2v\") pod \"redhat-operators-dh9jz\" (UID: \"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd\") " pod="openshift-marketplace/redhat-operators-dh9jz" Feb 27 11:01:22 crc kubenswrapper[4728]: I0227 11:01:22.028331 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd-utilities\") pod \"redhat-operators-dh9jz\" (UID: \"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd\") " pod="openshift-marketplace/redhat-operators-dh9jz" Feb 27 11:01:22 crc kubenswrapper[4728]: I0227 11:01:22.028812 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd-catalog-content\") pod \"redhat-operators-dh9jz\" (UID: \"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd\") " pod="openshift-marketplace/redhat-operators-dh9jz" Feb 27 11:01:22 crc kubenswrapper[4728]: I0227 11:01:22.028854 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd-utilities\") pod \"redhat-operators-dh9jz\" (UID: \"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd\") " pod="openshift-marketplace/redhat-operators-dh9jz" Feb 27 11:01:22 crc kubenswrapper[4728]: I0227 11:01:22.028883 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncf2v\" (UniqueName: \"kubernetes.io/projected/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd-kube-api-access-ncf2v\") pod \"redhat-operators-dh9jz\" (UID: \"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd\") " pod="openshift-marketplace/redhat-operators-dh9jz" Feb 27 11:01:22 crc kubenswrapper[4728]: I0227 11:01:22.029301 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd-catalog-content\") pod \"redhat-operators-dh9jz\" (UID: \"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd\") " pod="openshift-marketplace/redhat-operators-dh9jz" Feb 27 11:01:22 crc kubenswrapper[4728]: I0227 11:01:22.060222 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncf2v\" (UniqueName: \"kubernetes.io/projected/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd-kube-api-access-ncf2v\") pod \"redhat-operators-dh9jz\" (UID: \"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd\") " pod="openshift-marketplace/redhat-operators-dh9jz" Feb 27 11:01:22 crc kubenswrapper[4728]: I0227 11:01:22.144873 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dh9jz" Feb 27 11:01:22 crc kubenswrapper[4728]: I0227 11:01:22.712256 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dh9jz"] Feb 27 11:01:23 crc kubenswrapper[4728]: I0227 11:01:23.072223 4728 scope.go:117] "RemoveContainer" containerID="e7f9e5e7af91bf89f829ae55dd9e9b4c6b616663b446670de46b1807e2c16165" Feb 27 11:01:23 crc kubenswrapper[4728]: I0227 11:01:23.112378 4728 scope.go:117] "RemoveContainer" containerID="6f40f487424f71e0d0146e913f5f9501d1d177c2e72bcbc2ccbc0a4c401c579f" Feb 27 11:01:23 crc kubenswrapper[4728]: I0227 11:01:23.170484 4728 scope.go:117] "RemoveContainer" containerID="0f26c5efd0b2a4d72fccf77b12fbee4f8a5ef9a3950d91e4dc4a7c29890194f5" Feb 27 11:01:23 crc kubenswrapper[4728]: I0227 11:01:23.204563 4728 scope.go:117] "RemoveContainer" containerID="88d1b4dcf84f7296d467a124e0be85197cfc9732fb0ec8ec914114972c0f328b" Feb 27 11:01:23 crc kubenswrapper[4728]: I0227 11:01:23.263481 4728 scope.go:117] "RemoveContainer" containerID="9f6ea11dad6bb2049d938751dcc3375fb2100ea3e696693d6c0e5f6ed31e08ad" Feb 27 11:01:23 crc kubenswrapper[4728]: I0227 11:01:23.393674 4728 generic.go:334] "Generic (PLEG): container finished" podID="e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd" containerID="9d62686c3ca2830a26a5a1b5a4d4918d6f945a212f0c7121118c890537599499" exitCode=0 Feb 27 11:01:23 crc kubenswrapper[4728]: I0227 11:01:23.393869 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dh9jz" event={"ID":"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd","Type":"ContainerDied","Data":"9d62686c3ca2830a26a5a1b5a4d4918d6f945a212f0c7121118c890537599499"} Feb 27 11:01:23 crc kubenswrapper[4728]: I0227 11:01:23.393994 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dh9jz" 
event={"ID":"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd","Type":"ContainerStarted","Data":"13d1f15cf8c2c5fa2b0a3cc9e251a63908f8b0fb2b2088257699807b540f6097"} Feb 27 11:01:25 crc kubenswrapper[4728]: I0227 11:01:25.420540 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dh9jz" event={"ID":"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd","Type":"ContainerStarted","Data":"a081dd9b20a5b725030da2643a468c187ec8f68b1f5c3283321a5d4633404395"} Feb 27 11:01:28 crc kubenswrapper[4728]: I0227 11:01:28.477932 4728 generic.go:334] "Generic (PLEG): container finished" podID="1c346330-059d-4998-91ec-3014d3cfe1b9" containerID="dc293e1c6e851a3747d59844298a2d9dfb0bc22d97771722ccb0ed7c0526e5d9" exitCode=0 Feb 27 11:01:28 crc kubenswrapper[4728]: I0227 11:01:28.478448 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d" event={"ID":"1c346330-059d-4998-91ec-3014d3cfe1b9","Type":"ContainerDied","Data":"dc293e1c6e851a3747d59844298a2d9dfb0bc22d97771722ccb0ed7c0526e5d9"} Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.356517 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.459528 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c346330-059d-4998-91ec-3014d3cfe1b9-ssh-key-openstack-edpm-ipam\") pod \"1c346330-059d-4998-91ec-3014d3cfe1b9\" (UID: \"1c346330-059d-4998-91ec-3014d3cfe1b9\") " Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.459860 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4smbq\" (UniqueName: \"kubernetes.io/projected/1c346330-059d-4998-91ec-3014d3cfe1b9-kube-api-access-4smbq\") pod \"1c346330-059d-4998-91ec-3014d3cfe1b9\" (UID: \"1c346330-059d-4998-91ec-3014d3cfe1b9\") " Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.459963 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c346330-059d-4998-91ec-3014d3cfe1b9-inventory\") pod \"1c346330-059d-4998-91ec-3014d3cfe1b9\" (UID: \"1c346330-059d-4998-91ec-3014d3cfe1b9\") " Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.464838 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c346330-059d-4998-91ec-3014d3cfe1b9-kube-api-access-4smbq" (OuterVolumeSpecName: "kube-api-access-4smbq") pod "1c346330-059d-4998-91ec-3014d3cfe1b9" (UID: "1c346330-059d-4998-91ec-3014d3cfe1b9"). InnerVolumeSpecName "kube-api-access-4smbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.497861 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c346330-059d-4998-91ec-3014d3cfe1b9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1c346330-059d-4998-91ec-3014d3cfe1b9" (UID: "1c346330-059d-4998-91ec-3014d3cfe1b9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.497919 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c346330-059d-4998-91ec-3014d3cfe1b9-inventory" (OuterVolumeSpecName: "inventory") pod "1c346330-059d-4998-91ec-3014d3cfe1b9" (UID: "1c346330-059d-4998-91ec-3014d3cfe1b9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.499065 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d" event={"ID":"1c346330-059d-4998-91ec-3014d3cfe1b9","Type":"ContainerDied","Data":"5654e6ded48af7d7d1bf089b9099a781166705f647f81771316c6236e6bf59c2"} Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.499101 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5654e6ded48af7d7d1bf089b9099a781166705f647f81771316c6236e6bf59c2" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.499149 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vh27d" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.562034 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c346330-059d-4998-91ec-3014d3cfe1b9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.562273 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4smbq\" (UniqueName: \"kubernetes.io/projected/1c346330-059d-4998-91ec-3014d3cfe1b9-kube-api-access-4smbq\") on node \"crc\" DevicePath \"\"" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.562412 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c346330-059d-4998-91ec-3014d3cfe1b9-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.618617 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4"] Feb 27 11:01:30 crc kubenswrapper[4728]: E0227 11:01:30.619095 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c346330-059d-4998-91ec-3014d3cfe1b9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.619114 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c346330-059d-4998-91ec-3014d3cfe1b9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.619342 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c346330-059d-4998-91ec-3014d3cfe1b9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.620467 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.626414 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.626451 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.626466 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.631825 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.633847 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4"] Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.664623 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15e0a257-4a55-402f-b410-fa67a8cc6b7d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4\" (UID: \"15e0a257-4a55-402f-b410-fa67a8cc6b7d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.664696 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15e0a257-4a55-402f-b410-fa67a8cc6b7d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4\" (UID: \"15e0a257-4a55-402f-b410-fa67a8cc6b7d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.664757 
4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srtbd\" (UniqueName: \"kubernetes.io/projected/15e0a257-4a55-402f-b410-fa67a8cc6b7d-kube-api-access-srtbd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4\" (UID: \"15e0a257-4a55-402f-b410-fa67a8cc6b7d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.767935 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15e0a257-4a55-402f-b410-fa67a8cc6b7d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4\" (UID: \"15e0a257-4a55-402f-b410-fa67a8cc6b7d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.768395 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15e0a257-4a55-402f-b410-fa67a8cc6b7d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4\" (UID: \"15e0a257-4a55-402f-b410-fa67a8cc6b7d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.768676 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srtbd\" (UniqueName: \"kubernetes.io/projected/15e0a257-4a55-402f-b410-fa67a8cc6b7d-kube-api-access-srtbd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4\" (UID: \"15e0a257-4a55-402f-b410-fa67a8cc6b7d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.776899 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/15e0a257-4a55-402f-b410-fa67a8cc6b7d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4\" (UID: \"15e0a257-4a55-402f-b410-fa67a8cc6b7d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.785179 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srtbd\" (UniqueName: \"kubernetes.io/projected/15e0a257-4a55-402f-b410-fa67a8cc6b7d-kube-api-access-srtbd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4\" (UID: \"15e0a257-4a55-402f-b410-fa67a8cc6b7d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.787533 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15e0a257-4a55-402f-b410-fa67a8cc6b7d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4\" (UID: \"15e0a257-4a55-402f-b410-fa67a8cc6b7d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4" Feb 27 11:01:30 crc kubenswrapper[4728]: I0227 11:01:30.940317 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4" Feb 27 11:01:31 crc kubenswrapper[4728]: W0227 11:01:31.400944 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15e0a257_4a55_402f_b410_fa67a8cc6b7d.slice/crio-c32e6ac27365692ab1da51e1f2e62ffa6e6eed9bbf9ec65c393256150d3a6212 WatchSource:0}: Error finding container c32e6ac27365692ab1da51e1f2e62ffa6e6eed9bbf9ec65c393256150d3a6212: Status 404 returned error can't find the container with id c32e6ac27365692ab1da51e1f2e62ffa6e6eed9bbf9ec65c393256150d3a6212 Feb 27 11:01:31 crc kubenswrapper[4728]: I0227 11:01:31.401652 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4"] Feb 27 11:01:31 crc kubenswrapper[4728]: I0227 11:01:31.510048 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4" event={"ID":"15e0a257-4a55-402f-b410-fa67a8cc6b7d","Type":"ContainerStarted","Data":"c32e6ac27365692ab1da51e1f2e62ffa6e6eed9bbf9ec65c393256150d3a6212"} Feb 27 11:01:31 crc kubenswrapper[4728]: I0227 11:01:31.512322 4728 generic.go:334] "Generic (PLEG): container finished" podID="e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd" containerID="a081dd9b20a5b725030da2643a468c187ec8f68b1f5c3283321a5d4633404395" exitCode=0 Feb 27 11:01:31 crc kubenswrapper[4728]: I0227 11:01:31.512364 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dh9jz" event={"ID":"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd","Type":"ContainerDied","Data":"a081dd9b20a5b725030da2643a468c187ec8f68b1f5c3283321a5d4633404395"} Feb 27 11:01:32 crc kubenswrapper[4728]: I0227 11:01:32.544050 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dh9jz" 
event={"ID":"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd","Type":"ContainerStarted","Data":"197da9e52ac27c3516c8fd8024f11cbd86344063241a6bbd7688e9ec2329a45b"} Feb 27 11:01:32 crc kubenswrapper[4728]: I0227 11:01:32.546085 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4" event={"ID":"15e0a257-4a55-402f-b410-fa67a8cc6b7d","Type":"ContainerStarted","Data":"81982eb0b053ab5f0e805e017d009a71a1d518daad0a953a571437c8c96f16b2"} Feb 27 11:01:32 crc kubenswrapper[4728]: I0227 11:01:32.576348 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dh9jz" podStartSLOduration=3.078028131 podStartE2EDuration="11.576325983s" podCreationTimestamp="2026-02-27 11:01:21 +0000 UTC" firstStartedPulling="2026-02-27 11:01:23.411663799 +0000 UTC m=+2103.374029905" lastFinishedPulling="2026-02-27 11:01:31.909961651 +0000 UTC m=+2111.872327757" observedRunningTime="2026-02-27 11:01:32.561683046 +0000 UTC m=+2112.524049162" watchObservedRunningTime="2026-02-27 11:01:32.576325983 +0000 UTC m=+2112.538692109" Feb 27 11:01:32 crc kubenswrapper[4728]: I0227 11:01:32.589928 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4" podStartSLOduration=2.085141085 podStartE2EDuration="2.589905381s" podCreationTimestamp="2026-02-27 11:01:30 +0000 UTC" firstStartedPulling="2026-02-27 11:01:31.403730485 +0000 UTC m=+2111.366096591" lastFinishedPulling="2026-02-27 11:01:31.908494771 +0000 UTC m=+2111.870860887" observedRunningTime="2026-02-27 11:01:32.579138189 +0000 UTC m=+2112.541504295" watchObservedRunningTime="2026-02-27 11:01:32.589905381 +0000 UTC m=+2112.552271507" Feb 27 11:01:42 crc kubenswrapper[4728]: I0227 11:01:42.154788 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dh9jz" Feb 27 11:01:42 crc 
kubenswrapper[4728]: I0227 11:01:42.155875 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dh9jz" Feb 27 11:01:43 crc kubenswrapper[4728]: I0227 11:01:43.207590 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dh9jz" podUID="e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd" containerName="registry-server" probeResult="failure" output=< Feb 27 11:01:43 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:01:43 crc kubenswrapper[4728]: > Feb 27 11:01:52 crc kubenswrapper[4728]: I0227 11:01:52.225549 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dh9jz" Feb 27 11:01:52 crc kubenswrapper[4728]: I0227 11:01:52.289761 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dh9jz" Feb 27 11:01:53 crc kubenswrapper[4728]: I0227 11:01:53.008906 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dh9jz"] Feb 27 11:01:53 crc kubenswrapper[4728]: I0227 11:01:53.791491 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dh9jz" podUID="e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd" containerName="registry-server" containerID="cri-o://197da9e52ac27c3516c8fd8024f11cbd86344063241a6bbd7688e9ec2329a45b" gracePeriod=2 Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.392256 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dh9jz" Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.550731 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncf2v\" (UniqueName: \"kubernetes.io/projected/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd-kube-api-access-ncf2v\") pod \"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd\" (UID: \"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd\") " Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.550904 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd-catalog-content\") pod \"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd\" (UID: \"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd\") " Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.551014 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd-utilities\") pod \"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd\" (UID: \"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd\") " Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.552261 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd-utilities" (OuterVolumeSpecName: "utilities") pod "e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd" (UID: "e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.556933 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd-kube-api-access-ncf2v" (OuterVolumeSpecName: "kube-api-access-ncf2v") pod "e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd" (UID: "e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd"). InnerVolumeSpecName "kube-api-access-ncf2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.653021 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncf2v\" (UniqueName: \"kubernetes.io/projected/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd-kube-api-access-ncf2v\") on node \"crc\" DevicePath \"\"" Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.653049 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.685858 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd" (UID: "e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.755572 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.804027 4728 generic.go:334] "Generic (PLEG): container finished" podID="e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd" containerID="197da9e52ac27c3516c8fd8024f11cbd86344063241a6bbd7688e9ec2329a45b" exitCode=0 Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.804123 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dh9jz" event={"ID":"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd","Type":"ContainerDied","Data":"197da9e52ac27c3516c8fd8024f11cbd86344063241a6bbd7688e9ec2329a45b"} Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.804202 4728 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-dh9jz" event={"ID":"e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd","Type":"ContainerDied","Data":"13d1f15cf8c2c5fa2b0a3cc9e251a63908f8b0fb2b2088257699807b540f6097"} Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.804129 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dh9jz" Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.804223 4728 scope.go:117] "RemoveContainer" containerID="197da9e52ac27c3516c8fd8024f11cbd86344063241a6bbd7688e9ec2329a45b" Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.832580 4728 scope.go:117] "RemoveContainer" containerID="a081dd9b20a5b725030da2643a468c187ec8f68b1f5c3283321a5d4633404395" Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.838532 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dh9jz"] Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.853467 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dh9jz"] Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.857843 4728 scope.go:117] "RemoveContainer" containerID="9d62686c3ca2830a26a5a1b5a4d4918d6f945a212f0c7121118c890537599499" Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.922815 4728 scope.go:117] "RemoveContainer" containerID="197da9e52ac27c3516c8fd8024f11cbd86344063241a6bbd7688e9ec2329a45b" Feb 27 11:01:54 crc kubenswrapper[4728]: E0227 11:01:54.923519 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"197da9e52ac27c3516c8fd8024f11cbd86344063241a6bbd7688e9ec2329a45b\": container with ID starting with 197da9e52ac27c3516c8fd8024f11cbd86344063241a6bbd7688e9ec2329a45b not found: ID does not exist" containerID="197da9e52ac27c3516c8fd8024f11cbd86344063241a6bbd7688e9ec2329a45b" Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.923573 4728 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"197da9e52ac27c3516c8fd8024f11cbd86344063241a6bbd7688e9ec2329a45b"} err="failed to get container status \"197da9e52ac27c3516c8fd8024f11cbd86344063241a6bbd7688e9ec2329a45b\": rpc error: code = NotFound desc = could not find container \"197da9e52ac27c3516c8fd8024f11cbd86344063241a6bbd7688e9ec2329a45b\": container with ID starting with 197da9e52ac27c3516c8fd8024f11cbd86344063241a6bbd7688e9ec2329a45b not found: ID does not exist" Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.923607 4728 scope.go:117] "RemoveContainer" containerID="a081dd9b20a5b725030da2643a468c187ec8f68b1f5c3283321a5d4633404395" Feb 27 11:01:54 crc kubenswrapper[4728]: E0227 11:01:54.923978 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a081dd9b20a5b725030da2643a468c187ec8f68b1f5c3283321a5d4633404395\": container with ID starting with a081dd9b20a5b725030da2643a468c187ec8f68b1f5c3283321a5d4633404395 not found: ID does not exist" containerID="a081dd9b20a5b725030da2643a468c187ec8f68b1f5c3283321a5d4633404395" Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.924107 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a081dd9b20a5b725030da2643a468c187ec8f68b1f5c3283321a5d4633404395"} err="failed to get container status \"a081dd9b20a5b725030da2643a468c187ec8f68b1f5c3283321a5d4633404395\": rpc error: code = NotFound desc = could not find container \"a081dd9b20a5b725030da2643a468c187ec8f68b1f5c3283321a5d4633404395\": container with ID starting with a081dd9b20a5b725030da2643a468c187ec8f68b1f5c3283321a5d4633404395 not found: ID does not exist" Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.924215 4728 scope.go:117] "RemoveContainer" containerID="9d62686c3ca2830a26a5a1b5a4d4918d6f945a212f0c7121118c890537599499" Feb 27 11:01:54 crc kubenswrapper[4728]: E0227 
11:01:54.924609 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d62686c3ca2830a26a5a1b5a4d4918d6f945a212f0c7121118c890537599499\": container with ID starting with 9d62686c3ca2830a26a5a1b5a4d4918d6f945a212f0c7121118c890537599499 not found: ID does not exist" containerID="9d62686c3ca2830a26a5a1b5a4d4918d6f945a212f0c7121118c890537599499" Feb 27 11:01:54 crc kubenswrapper[4728]: I0227 11:01:54.924968 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d62686c3ca2830a26a5a1b5a4d4918d6f945a212f0c7121118c890537599499"} err="failed to get container status \"9d62686c3ca2830a26a5a1b5a4d4918d6f945a212f0c7121118c890537599499\": rpc error: code = NotFound desc = could not find container \"9d62686c3ca2830a26a5a1b5a4d4918d6f945a212f0c7121118c890537599499\": container with ID starting with 9d62686c3ca2830a26a5a1b5a4d4918d6f945a212f0c7121118c890537599499 not found: ID does not exist" Feb 27 11:01:56 crc kubenswrapper[4728]: I0227 11:01:56.739957 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd" path="/var/lib/kubelet/pods/e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd/volumes" Feb 27 11:02:00 crc kubenswrapper[4728]: I0227 11:02:00.144228 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536502-nm4pd"] Feb 27 11:02:00 crc kubenswrapper[4728]: E0227 11:02:00.145468 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd" containerName="extract-content" Feb 27 11:02:00 crc kubenswrapper[4728]: I0227 11:02:00.145486 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd" containerName="extract-content" Feb 27 11:02:00 crc kubenswrapper[4728]: E0227 11:02:00.145538 4728 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd" containerName="registry-server" Feb 27 11:02:00 crc kubenswrapper[4728]: I0227 11:02:00.145546 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd" containerName="registry-server" Feb 27 11:02:00 crc kubenswrapper[4728]: E0227 11:02:00.145580 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd" containerName="extract-utilities" Feb 27 11:02:00 crc kubenswrapper[4728]: I0227 11:02:00.145589 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd" containerName="extract-utilities" Feb 27 11:02:00 crc kubenswrapper[4728]: I0227 11:02:00.145839 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d4b51e-9bc2-4ba6-b9df-9d559f09bebd" containerName="registry-server" Feb 27 11:02:00 crc kubenswrapper[4728]: I0227 11:02:00.146851 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536502-nm4pd" Feb 27 11:02:00 crc kubenswrapper[4728]: I0227 11:02:00.150821 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:02:00 crc kubenswrapper[4728]: I0227 11:02:00.152182 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:02:00 crc kubenswrapper[4728]: I0227 11:02:00.152307 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:02:00 crc kubenswrapper[4728]: I0227 11:02:00.167361 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536502-nm4pd"] Feb 27 11:02:00 crc kubenswrapper[4728]: I0227 11:02:00.303809 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v5bl\" (UniqueName: 
\"kubernetes.io/projected/4a3bdcec-52e7-41a6-b551-b4a93c8dce37-kube-api-access-6v5bl\") pod \"auto-csr-approver-29536502-nm4pd\" (UID: \"4a3bdcec-52e7-41a6-b551-b4a93c8dce37\") " pod="openshift-infra/auto-csr-approver-29536502-nm4pd" Feb 27 11:02:00 crc kubenswrapper[4728]: I0227 11:02:00.406139 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v5bl\" (UniqueName: \"kubernetes.io/projected/4a3bdcec-52e7-41a6-b551-b4a93c8dce37-kube-api-access-6v5bl\") pod \"auto-csr-approver-29536502-nm4pd\" (UID: \"4a3bdcec-52e7-41a6-b551-b4a93c8dce37\") " pod="openshift-infra/auto-csr-approver-29536502-nm4pd" Feb 27 11:02:00 crc kubenswrapper[4728]: I0227 11:02:00.423704 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v5bl\" (UniqueName: \"kubernetes.io/projected/4a3bdcec-52e7-41a6-b551-b4a93c8dce37-kube-api-access-6v5bl\") pod \"auto-csr-approver-29536502-nm4pd\" (UID: \"4a3bdcec-52e7-41a6-b551-b4a93c8dce37\") " pod="openshift-infra/auto-csr-approver-29536502-nm4pd" Feb 27 11:02:00 crc kubenswrapper[4728]: I0227 11:02:00.476362 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536502-nm4pd" Feb 27 11:02:00 crc kubenswrapper[4728]: I0227 11:02:00.960347 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536502-nm4pd"] Feb 27 11:02:01 crc kubenswrapper[4728]: I0227 11:02:01.895039 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536502-nm4pd" event={"ID":"4a3bdcec-52e7-41a6-b551-b4a93c8dce37","Type":"ContainerStarted","Data":"1ac74324dde7a0f6567456eddc468b0d88fa1eecb571aec85e9625e8a2330afd"} Feb 27 11:02:02 crc kubenswrapper[4728]: I0227 11:02:02.908255 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536502-nm4pd" event={"ID":"4a3bdcec-52e7-41a6-b551-b4a93c8dce37","Type":"ContainerStarted","Data":"6bfe573f5a38d7d588eab3f852f7360fd17cfdb0d3a7e931c6735fe74876d6a4"} Feb 27 11:02:02 crc kubenswrapper[4728]: I0227 11:02:02.930544 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536502-nm4pd" podStartSLOduration=1.442146937 podStartE2EDuration="2.930526955s" podCreationTimestamp="2026-02-27 11:02:00 +0000 UTC" firstStartedPulling="2026-02-27 11:02:00.967703284 +0000 UTC m=+2140.930069390" lastFinishedPulling="2026-02-27 11:02:02.456083302 +0000 UTC m=+2142.418449408" observedRunningTime="2026-02-27 11:02:02.92256183 +0000 UTC m=+2142.884927946" watchObservedRunningTime="2026-02-27 11:02:02.930526955 +0000 UTC m=+2142.892893061" Feb 27 11:02:03 crc kubenswrapper[4728]: I0227 11:02:03.922835 4728 generic.go:334] "Generic (PLEG): container finished" podID="4a3bdcec-52e7-41a6-b551-b4a93c8dce37" containerID="6bfe573f5a38d7d588eab3f852f7360fd17cfdb0d3a7e931c6735fe74876d6a4" exitCode=0 Feb 27 11:02:03 crc kubenswrapper[4728]: I0227 11:02:03.923097 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536502-nm4pd" 
event={"ID":"4a3bdcec-52e7-41a6-b551-b4a93c8dce37","Type":"ContainerDied","Data":"6bfe573f5a38d7d588eab3f852f7360fd17cfdb0d3a7e931c6735fe74876d6a4"} Feb 27 11:02:05 crc kubenswrapper[4728]: I0227 11:02:05.127034 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-zdmnv"] Feb 27 11:02:05 crc kubenswrapper[4728]: I0227 11:02:05.140234 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-zdmnv"] Feb 27 11:02:05 crc kubenswrapper[4728]: I0227 11:02:05.545355 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536502-nm4pd" Feb 27 11:02:05 crc kubenswrapper[4728]: I0227 11:02:05.560835 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v5bl\" (UniqueName: \"kubernetes.io/projected/4a3bdcec-52e7-41a6-b551-b4a93c8dce37-kube-api-access-6v5bl\") pod \"4a3bdcec-52e7-41a6-b551-b4a93c8dce37\" (UID: \"4a3bdcec-52e7-41a6-b551-b4a93c8dce37\") " Feb 27 11:02:05 crc kubenswrapper[4728]: I0227 11:02:05.568744 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a3bdcec-52e7-41a6-b551-b4a93c8dce37-kube-api-access-6v5bl" (OuterVolumeSpecName: "kube-api-access-6v5bl") pod "4a3bdcec-52e7-41a6-b551-b4a93c8dce37" (UID: "4a3bdcec-52e7-41a6-b551-b4a93c8dce37"). InnerVolumeSpecName "kube-api-access-6v5bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:02:05 crc kubenswrapper[4728]: I0227 11:02:05.663670 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v5bl\" (UniqueName: \"kubernetes.io/projected/4a3bdcec-52e7-41a6-b551-b4a93c8dce37-kube-api-access-6v5bl\") on node \"crc\" DevicePath \"\"" Feb 27 11:02:05 crc kubenswrapper[4728]: I0227 11:02:05.950536 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536502-nm4pd" Feb 27 11:02:05 crc kubenswrapper[4728]: I0227 11:02:05.950529 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536502-nm4pd" event={"ID":"4a3bdcec-52e7-41a6-b551-b4a93c8dce37","Type":"ContainerDied","Data":"1ac74324dde7a0f6567456eddc468b0d88fa1eecb571aec85e9625e8a2330afd"} Feb 27 11:02:05 crc kubenswrapper[4728]: I0227 11:02:05.950674 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ac74324dde7a0f6567456eddc468b0d88fa1eecb571aec85e9625e8a2330afd" Feb 27 11:02:05 crc kubenswrapper[4728]: I0227 11:02:05.987264 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536496-6p4m5"] Feb 27 11:02:05 crc kubenswrapper[4728]: I0227 11:02:05.998337 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536496-6p4m5"] Feb 27 11:02:06 crc kubenswrapper[4728]: I0227 11:02:06.739111 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08778ec2-d0d0-42a2-8497-290bfe1b10c1" path="/var/lib/kubelet/pods/08778ec2-d0d0-42a2-8497-290bfe1b10c1/volumes" Feb 27 11:02:06 crc kubenswrapper[4728]: I0227 11:02:06.741228 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95407fb7-61a8-4fd7-8059-63734e7e50e9" path="/var/lib/kubelet/pods/95407fb7-61a8-4fd7-8059-63734e7e50e9/volumes" Feb 27 11:02:22 crc kubenswrapper[4728]: I0227 11:02:22.139574 4728 generic.go:334] "Generic (PLEG): container finished" podID="15e0a257-4a55-402f-b410-fa67a8cc6b7d" containerID="81982eb0b053ab5f0e805e017d009a71a1d518daad0a953a571437c8c96f16b2" exitCode=0 Feb 27 11:02:22 crc kubenswrapper[4728]: I0227 11:02:22.139643 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4" 
event={"ID":"15e0a257-4a55-402f-b410-fa67a8cc6b7d","Type":"ContainerDied","Data":"81982eb0b053ab5f0e805e017d009a71a1d518daad0a953a571437c8c96f16b2"} Feb 27 11:02:23 crc kubenswrapper[4728]: I0227 11:02:23.530234 4728 scope.go:117] "RemoveContainer" containerID="83031d97540b7e4e511d750099704061677cbb54f2239b7990769fc6ef19d022" Feb 27 11:02:23 crc kubenswrapper[4728]: I0227 11:02:23.600230 4728 scope.go:117] "RemoveContainer" containerID="7daba541974762bd1dc66fca9853b1be4a8ce00014832111e32d50a498e242d1" Feb 27 11:02:23 crc kubenswrapper[4728]: I0227 11:02:23.738993 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4" Feb 27 11:02:23 crc kubenswrapper[4728]: I0227 11:02:23.765166 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srtbd\" (UniqueName: \"kubernetes.io/projected/15e0a257-4a55-402f-b410-fa67a8cc6b7d-kube-api-access-srtbd\") pod \"15e0a257-4a55-402f-b410-fa67a8cc6b7d\" (UID: \"15e0a257-4a55-402f-b410-fa67a8cc6b7d\") " Feb 27 11:02:23 crc kubenswrapper[4728]: I0227 11:02:23.765613 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15e0a257-4a55-402f-b410-fa67a8cc6b7d-ssh-key-openstack-edpm-ipam\") pod \"15e0a257-4a55-402f-b410-fa67a8cc6b7d\" (UID: \"15e0a257-4a55-402f-b410-fa67a8cc6b7d\") " Feb 27 11:02:23 crc kubenswrapper[4728]: I0227 11:02:23.777581 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e0a257-4a55-402f-b410-fa67a8cc6b7d-kube-api-access-srtbd" (OuterVolumeSpecName: "kube-api-access-srtbd") pod "15e0a257-4a55-402f-b410-fa67a8cc6b7d" (UID: "15e0a257-4a55-402f-b410-fa67a8cc6b7d"). InnerVolumeSpecName "kube-api-access-srtbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:02:23 crc kubenswrapper[4728]: I0227 11:02:23.811738 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e0a257-4a55-402f-b410-fa67a8cc6b7d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "15e0a257-4a55-402f-b410-fa67a8cc6b7d" (UID: "15e0a257-4a55-402f-b410-fa67a8cc6b7d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:02:23 crc kubenswrapper[4728]: I0227 11:02:23.867675 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15e0a257-4a55-402f-b410-fa67a8cc6b7d-inventory\") pod \"15e0a257-4a55-402f-b410-fa67a8cc6b7d\" (UID: \"15e0a257-4a55-402f-b410-fa67a8cc6b7d\") " Feb 27 11:02:23 crc kubenswrapper[4728]: I0227 11:02:23.869081 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srtbd\" (UniqueName: \"kubernetes.io/projected/15e0a257-4a55-402f-b410-fa67a8cc6b7d-kube-api-access-srtbd\") on node \"crc\" DevicePath \"\"" Feb 27 11:02:23 crc kubenswrapper[4728]: I0227 11:02:23.869198 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15e0a257-4a55-402f-b410-fa67a8cc6b7d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 11:02:23 crc kubenswrapper[4728]: I0227 11:02:23.897106 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e0a257-4a55-402f-b410-fa67a8cc6b7d-inventory" (OuterVolumeSpecName: "inventory") pod "15e0a257-4a55-402f-b410-fa67a8cc6b7d" (UID: "15e0a257-4a55-402f-b410-fa67a8cc6b7d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:02:23 crc kubenswrapper[4728]: I0227 11:02:23.972105 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15e0a257-4a55-402f-b410-fa67a8cc6b7d-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.168050 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4" event={"ID":"15e0a257-4a55-402f-b410-fa67a8cc6b7d","Type":"ContainerDied","Data":"c32e6ac27365692ab1da51e1f2e62ffa6e6eed9bbf9ec65c393256150d3a6212"} Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.168092 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c32e6ac27365692ab1da51e1f2e62ffa6e6eed9bbf9ec65c393256150d3a6212" Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.168113 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4" Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.293206 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8tlqs"] Feb 27 11:02:24 crc kubenswrapper[4728]: E0227 11:02:24.293935 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a3bdcec-52e7-41a6-b551-b4a93c8dce37" containerName="oc" Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.293962 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3bdcec-52e7-41a6-b551-b4a93c8dce37" containerName="oc" Feb 27 11:02:24 crc kubenswrapper[4728]: E0227 11:02:24.294001 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e0a257-4a55-402f-b410-fa67a8cc6b7d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.294014 4728 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="15e0a257-4a55-402f-b410-fa67a8cc6b7d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.294449 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e0a257-4a55-402f-b410-fa67a8cc6b7d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.294489 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a3bdcec-52e7-41a6-b551-b4a93c8dce37" containerName="oc"
Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.295569 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8tlqs"
Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.298592 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.299405 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.299425 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7"
Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.307981 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8tlqs"]
Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.313042 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.384983 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bfce4106-f546-4fdb-af8a-adc09ccd8b17-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8tlqs\" (UID: \"bfce4106-f546-4fdb-af8a-adc09ccd8b17\") " pod="openstack/ssh-known-hosts-edpm-deployment-8tlqs"
Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.385733 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfce4106-f546-4fdb-af8a-adc09ccd8b17-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8tlqs\" (UID: \"bfce4106-f546-4fdb-af8a-adc09ccd8b17\") " pod="openstack/ssh-known-hosts-edpm-deployment-8tlqs"
Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.385818 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv58m\" (UniqueName: \"kubernetes.io/projected/bfce4106-f546-4fdb-af8a-adc09ccd8b17-kube-api-access-gv58m\") pod \"ssh-known-hosts-edpm-deployment-8tlqs\" (UID: \"bfce4106-f546-4fdb-af8a-adc09ccd8b17\") " pod="openstack/ssh-known-hosts-edpm-deployment-8tlqs"
Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.490088 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfce4106-f546-4fdb-af8a-adc09ccd8b17-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8tlqs\" (UID: \"bfce4106-f546-4fdb-af8a-adc09ccd8b17\") " pod="openstack/ssh-known-hosts-edpm-deployment-8tlqs"
Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.490177 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv58m\" (UniqueName: \"kubernetes.io/projected/bfce4106-f546-4fdb-af8a-adc09ccd8b17-kube-api-access-gv58m\") pod \"ssh-known-hosts-edpm-deployment-8tlqs\" (UID: \"bfce4106-f546-4fdb-af8a-adc09ccd8b17\") " pod="openstack/ssh-known-hosts-edpm-deployment-8tlqs"
Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.490314 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bfce4106-f546-4fdb-af8a-adc09ccd8b17-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8tlqs\" (UID: \"bfce4106-f546-4fdb-af8a-adc09ccd8b17\") " pod="openstack/ssh-known-hosts-edpm-deployment-8tlqs"
Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.495567 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bfce4106-f546-4fdb-af8a-adc09ccd8b17-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8tlqs\" (UID: \"bfce4106-f546-4fdb-af8a-adc09ccd8b17\") " pod="openstack/ssh-known-hosts-edpm-deployment-8tlqs"
Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.505151 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfce4106-f546-4fdb-af8a-adc09ccd8b17-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8tlqs\" (UID: \"bfce4106-f546-4fdb-af8a-adc09ccd8b17\") " pod="openstack/ssh-known-hosts-edpm-deployment-8tlqs"
Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.517288 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv58m\" (UniqueName: \"kubernetes.io/projected/bfce4106-f546-4fdb-af8a-adc09ccd8b17-kube-api-access-gv58m\") pod \"ssh-known-hosts-edpm-deployment-8tlqs\" (UID: \"bfce4106-f546-4fdb-af8a-adc09ccd8b17\") " pod="openstack/ssh-known-hosts-edpm-deployment-8tlqs"
Feb 27 11:02:24 crc kubenswrapper[4728]: I0227 11:02:24.625377 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8tlqs"
Feb 27 11:02:25 crc kubenswrapper[4728]: I0227 11:02:25.251083 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8tlqs"]
Feb 27 11:02:25 crc kubenswrapper[4728]: W0227 11:02:25.262039 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfce4106_f546_4fdb_af8a_adc09ccd8b17.slice/crio-59b193479d76d0ede95f940d224c301f14a685da78a630dfdb1f762bf7861e9b WatchSource:0}: Error finding container 59b193479d76d0ede95f940d224c301f14a685da78a630dfdb1f762bf7861e9b: Status 404 returned error can't find the container with id 59b193479d76d0ede95f940d224c301f14a685da78a630dfdb1f762bf7861e9b
Feb 27 11:02:26 crc kubenswrapper[4728]: I0227 11:02:26.191375 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8tlqs" event={"ID":"bfce4106-f546-4fdb-af8a-adc09ccd8b17","Type":"ContainerStarted","Data":"eb00eb7671ad8165336dedb63fe9260e85512d25965f580f7c4b028c3efe2a09"}
Feb 27 11:02:26 crc kubenswrapper[4728]: I0227 11:02:26.192039 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8tlqs" event={"ID":"bfce4106-f546-4fdb-af8a-adc09ccd8b17","Type":"ContainerStarted","Data":"59b193479d76d0ede95f940d224c301f14a685da78a630dfdb1f762bf7861e9b"}
Feb 27 11:02:26 crc kubenswrapper[4728]: I0227 11:02:26.217654 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-8tlqs" podStartSLOduration=1.681930587 podStartE2EDuration="2.217635244s" podCreationTimestamp="2026-02-27 11:02:24 +0000 UTC" firstStartedPulling="2026-02-27 11:02:25.265689942 +0000 UTC m=+2165.228056048" lastFinishedPulling="2026-02-27 11:02:25.801394599 +0000 UTC m=+2165.763760705" observedRunningTime="2026-02-27 11:02:26.208528547 +0000 UTC m=+2166.170894653" watchObservedRunningTime="2026-02-27 11:02:26.217635244 +0000 UTC m=+2166.180001360"
Feb 27 11:02:33 crc kubenswrapper[4728]: I0227 11:02:33.297382 4728 generic.go:334] "Generic (PLEG): container finished" podID="bfce4106-f546-4fdb-af8a-adc09ccd8b17" containerID="eb00eb7671ad8165336dedb63fe9260e85512d25965f580f7c4b028c3efe2a09" exitCode=0
Feb 27 11:02:33 crc kubenswrapper[4728]: I0227 11:02:33.297485 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8tlqs" event={"ID":"bfce4106-f546-4fdb-af8a-adc09ccd8b17","Type":"ContainerDied","Data":"eb00eb7671ad8165336dedb63fe9260e85512d25965f580f7c4b028c3efe2a09"}
Feb 27 11:02:34 crc kubenswrapper[4728]: I0227 11:02:34.822150 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8tlqs"
Feb 27 11:02:34 crc kubenswrapper[4728]: I0227 11:02:34.966018 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bfce4106-f546-4fdb-af8a-adc09ccd8b17-inventory-0\") pod \"bfce4106-f546-4fdb-af8a-adc09ccd8b17\" (UID: \"bfce4106-f546-4fdb-af8a-adc09ccd8b17\") "
Feb 27 11:02:34 crc kubenswrapper[4728]: I0227 11:02:34.966260 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfce4106-f546-4fdb-af8a-adc09ccd8b17-ssh-key-openstack-edpm-ipam\") pod \"bfce4106-f546-4fdb-af8a-adc09ccd8b17\" (UID: \"bfce4106-f546-4fdb-af8a-adc09ccd8b17\") "
Feb 27 11:02:34 crc kubenswrapper[4728]: I0227 11:02:34.966475 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv58m\" (UniqueName: \"kubernetes.io/projected/bfce4106-f546-4fdb-af8a-adc09ccd8b17-kube-api-access-gv58m\") pod \"bfce4106-f546-4fdb-af8a-adc09ccd8b17\" (UID: \"bfce4106-f546-4fdb-af8a-adc09ccd8b17\") "
Feb 27 11:02:34 crc kubenswrapper[4728]: I0227 11:02:34.971382 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfce4106-f546-4fdb-af8a-adc09ccd8b17-kube-api-access-gv58m" (OuterVolumeSpecName: "kube-api-access-gv58m") pod "bfce4106-f546-4fdb-af8a-adc09ccd8b17" (UID: "bfce4106-f546-4fdb-af8a-adc09ccd8b17"). InnerVolumeSpecName "kube-api-access-gv58m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:02:34 crc kubenswrapper[4728]: I0227 11:02:34.996452 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfce4106-f546-4fdb-af8a-adc09ccd8b17-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bfce4106-f546-4fdb-af8a-adc09ccd8b17" (UID: "bfce4106-f546-4fdb-af8a-adc09ccd8b17"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:02:34 crc kubenswrapper[4728]: I0227 11:02:34.998225 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfce4106-f546-4fdb-af8a-adc09ccd8b17-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "bfce4106-f546-4fdb-af8a-adc09ccd8b17" (UID: "bfce4106-f546-4fdb-af8a-adc09ccd8b17"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.069952 4728 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bfce4106-f546-4fdb-af8a-adc09ccd8b17-inventory-0\") on node \"crc\" DevicePath \"\""
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.069993 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfce4106-f546-4fdb-af8a-adc09ccd8b17-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.070006 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv58m\" (UniqueName: \"kubernetes.io/projected/bfce4106-f546-4fdb-af8a-adc09ccd8b17-kube-api-access-gv58m\") on node \"crc\" DevicePath \"\""
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.322818 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8tlqs" event={"ID":"bfce4106-f546-4fdb-af8a-adc09ccd8b17","Type":"ContainerDied","Data":"59b193479d76d0ede95f940d224c301f14a685da78a630dfdb1f762bf7861e9b"}
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.323194 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59b193479d76d0ede95f940d224c301f14a685da78a630dfdb1f762bf7861e9b"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.323000 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8tlqs"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.400824 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n"]
Feb 27 11:02:35 crc kubenswrapper[4728]: E0227 11:02:35.401710 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfce4106-f546-4fdb-af8a-adc09ccd8b17" containerName="ssh-known-hosts-edpm-deployment"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.401809 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfce4106-f546-4fdb-af8a-adc09ccd8b17" containerName="ssh-known-hosts-edpm-deployment"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.402285 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfce4106-f546-4fdb-af8a-adc09ccd8b17" containerName="ssh-known-hosts-edpm-deployment"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.403427 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.406441 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.406779 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.407097 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.407180 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.422551 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n"]
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.478831 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbk9s\" (UniqueName: \"kubernetes.io/projected/72eee809-e748-4af7-a5b9-3f59015b2d8d-kube-api-access-sbk9s\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6v98n\" (UID: \"72eee809-e748-4af7-a5b9-3f59015b2d8d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.479300 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72eee809-e748-4af7-a5b9-3f59015b2d8d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6v98n\" (UID: \"72eee809-e748-4af7-a5b9-3f59015b2d8d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.479396 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72eee809-e748-4af7-a5b9-3f59015b2d8d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6v98n\" (UID: \"72eee809-e748-4af7-a5b9-3f59015b2d8d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.582820 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbk9s\" (UniqueName: \"kubernetes.io/projected/72eee809-e748-4af7-a5b9-3f59015b2d8d-kube-api-access-sbk9s\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6v98n\" (UID: \"72eee809-e748-4af7-a5b9-3f59015b2d8d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.583240 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72eee809-e748-4af7-a5b9-3f59015b2d8d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6v98n\" (UID: \"72eee809-e748-4af7-a5b9-3f59015b2d8d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.583330 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72eee809-e748-4af7-a5b9-3f59015b2d8d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6v98n\" (UID: \"72eee809-e748-4af7-a5b9-3f59015b2d8d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.588154 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72eee809-e748-4af7-a5b9-3f59015b2d8d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6v98n\" (UID: \"72eee809-e748-4af7-a5b9-3f59015b2d8d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.594454 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72eee809-e748-4af7-a5b9-3f59015b2d8d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6v98n\" (UID: \"72eee809-e748-4af7-a5b9-3f59015b2d8d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.601740 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbk9s\" (UniqueName: \"kubernetes.io/projected/72eee809-e748-4af7-a5b9-3f59015b2d8d-kube-api-access-sbk9s\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6v98n\" (UID: \"72eee809-e748-4af7-a5b9-3f59015b2d8d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n"
Feb 27 11:02:35 crc kubenswrapper[4728]: I0227 11:02:35.734162 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n"
Feb 27 11:02:36 crc kubenswrapper[4728]: I0227 11:02:36.404387 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n"]
Feb 27 11:02:36 crc kubenswrapper[4728]: W0227 11:02:36.412843 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72eee809_e748_4af7_a5b9_3f59015b2d8d.slice/crio-9a1d12b1a10d30a810cd2ba982f8cd72e8c6e58f0724edae953d9c20808cd187 WatchSource:0}: Error finding container 9a1d12b1a10d30a810cd2ba982f8cd72e8c6e58f0724edae953d9c20808cd187: Status 404 returned error can't find the container with id 9a1d12b1a10d30a810cd2ba982f8cd72e8c6e58f0724edae953d9c20808cd187
Feb 27 11:02:37 crc kubenswrapper[4728]: I0227 11:02:37.353228 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n" event={"ID":"72eee809-e748-4af7-a5b9-3f59015b2d8d","Type":"ContainerStarted","Data":"bafc8f8f6d7571eb9cdadd8d5a3f46644e108096eb077d37d99be85ad2b40cc1"}
Feb 27 11:02:37 crc kubenswrapper[4728]: I0227 11:02:37.353594 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n" event={"ID":"72eee809-e748-4af7-a5b9-3f59015b2d8d","Type":"ContainerStarted","Data":"9a1d12b1a10d30a810cd2ba982f8cd72e8c6e58f0724edae953d9c20808cd187"}
Feb 27 11:02:37 crc kubenswrapper[4728]: I0227 11:02:37.396728 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n" podStartSLOduration=1.948665701 podStartE2EDuration="2.396699059s" podCreationTimestamp="2026-02-27 11:02:35 +0000 UTC" firstStartedPulling="2026-02-27 11:02:36.417320402 +0000 UTC m=+2176.379686508" lastFinishedPulling="2026-02-27 11:02:36.86535376 +0000 UTC m=+2176.827719866" observedRunningTime="2026-02-27 11:02:37.379791779 +0000 UTC m=+2177.342157905" watchObservedRunningTime="2026-02-27 11:02:37.396699059 +0000 UTC m=+2177.359065205"
Feb 27 11:02:38 crc kubenswrapper[4728]: I0227 11:02:38.122843 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2tcmk"]
Feb 27 11:02:38 crc kubenswrapper[4728]: I0227 11:02:38.127527 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2tcmk"
Feb 27 11:02:38 crc kubenswrapper[4728]: I0227 11:02:38.147311 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2tcmk"]
Feb 27 11:02:38 crc kubenswrapper[4728]: I0227 11:02:38.163923 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdd3fc23-984e-421b-93ad-fc2302602e92-utilities\") pod \"community-operators-2tcmk\" (UID: \"bdd3fc23-984e-421b-93ad-fc2302602e92\") " pod="openshift-marketplace/community-operators-2tcmk"
Feb 27 11:02:38 crc kubenswrapper[4728]: I0227 11:02:38.164517 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdd3fc23-984e-421b-93ad-fc2302602e92-catalog-content\") pod \"community-operators-2tcmk\" (UID: \"bdd3fc23-984e-421b-93ad-fc2302602e92\") " pod="openshift-marketplace/community-operators-2tcmk"
Feb 27 11:02:38 crc kubenswrapper[4728]: I0227 11:02:38.164639 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm298\" (UniqueName: \"kubernetes.io/projected/bdd3fc23-984e-421b-93ad-fc2302602e92-kube-api-access-wm298\") pod \"community-operators-2tcmk\" (UID: \"bdd3fc23-984e-421b-93ad-fc2302602e92\") " pod="openshift-marketplace/community-operators-2tcmk"
Feb 27 11:02:38 crc kubenswrapper[4728]: I0227 11:02:38.266559 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm298\" (UniqueName: \"kubernetes.io/projected/bdd3fc23-984e-421b-93ad-fc2302602e92-kube-api-access-wm298\") pod \"community-operators-2tcmk\" (UID: \"bdd3fc23-984e-421b-93ad-fc2302602e92\") " pod="openshift-marketplace/community-operators-2tcmk"
Feb 27 11:02:38 crc kubenswrapper[4728]: I0227 11:02:38.266661 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdd3fc23-984e-421b-93ad-fc2302602e92-utilities\") pod \"community-operators-2tcmk\" (UID: \"bdd3fc23-984e-421b-93ad-fc2302602e92\") " pod="openshift-marketplace/community-operators-2tcmk"
Feb 27 11:02:38 crc kubenswrapper[4728]: I0227 11:02:38.266845 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdd3fc23-984e-421b-93ad-fc2302602e92-catalog-content\") pod \"community-operators-2tcmk\" (UID: \"bdd3fc23-984e-421b-93ad-fc2302602e92\") " pod="openshift-marketplace/community-operators-2tcmk"
Feb 27 11:02:38 crc kubenswrapper[4728]: I0227 11:02:38.267134 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdd3fc23-984e-421b-93ad-fc2302602e92-utilities\") pod \"community-operators-2tcmk\" (UID: \"bdd3fc23-984e-421b-93ad-fc2302602e92\") " pod="openshift-marketplace/community-operators-2tcmk"
Feb 27 11:02:38 crc kubenswrapper[4728]: I0227 11:02:38.267198 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdd3fc23-984e-421b-93ad-fc2302602e92-catalog-content\") pod \"community-operators-2tcmk\" (UID: \"bdd3fc23-984e-421b-93ad-fc2302602e92\") " pod="openshift-marketplace/community-operators-2tcmk"
Feb 27 11:02:38 crc kubenswrapper[4728]: I0227 11:02:38.286811 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm298\" (UniqueName: \"kubernetes.io/projected/bdd3fc23-984e-421b-93ad-fc2302602e92-kube-api-access-wm298\") pod \"community-operators-2tcmk\" (UID: \"bdd3fc23-984e-421b-93ad-fc2302602e92\") " pod="openshift-marketplace/community-operators-2tcmk"
Feb 27 11:02:38 crc kubenswrapper[4728]: I0227 11:02:38.474378 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2tcmk"
Feb 27 11:02:39 crc kubenswrapper[4728]: I0227 11:02:39.073161 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2tcmk"]
Feb 27 11:02:39 crc kubenswrapper[4728]: W0227 11:02:39.079721 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdd3fc23_984e_421b_93ad_fc2302602e92.slice/crio-0446fdb562a51f71648e60c4c17612637ccc9e89a64a04354783357154efa5d8 WatchSource:0}: Error finding container 0446fdb562a51f71648e60c4c17612637ccc9e89a64a04354783357154efa5d8: Status 404 returned error can't find the container with id 0446fdb562a51f71648e60c4c17612637ccc9e89a64a04354783357154efa5d8
Feb 27 11:02:39 crc kubenswrapper[4728]: I0227 11:02:39.378409 4728 generic.go:334] "Generic (PLEG): container finished" podID="bdd3fc23-984e-421b-93ad-fc2302602e92" containerID="62e3b0cdd6fc01eeaf43d756d289dbe02bcf953a438c9370d0a0943087f11e24" exitCode=0
Feb 27 11:02:39 crc kubenswrapper[4728]: I0227 11:02:39.378548 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tcmk" event={"ID":"bdd3fc23-984e-421b-93ad-fc2302602e92","Type":"ContainerDied","Data":"62e3b0cdd6fc01eeaf43d756d289dbe02bcf953a438c9370d0a0943087f11e24"}
Feb 27 11:02:39 crc kubenswrapper[4728]: I0227 11:02:39.378634 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tcmk" event={"ID":"bdd3fc23-984e-421b-93ad-fc2302602e92","Type":"ContainerStarted","Data":"0446fdb562a51f71648e60c4c17612637ccc9e89a64a04354783357154efa5d8"}
Feb 27 11:02:39 crc kubenswrapper[4728]: I0227 11:02:39.382146 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 27 11:02:41 crc kubenswrapper[4728]: I0227 11:02:41.403959 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tcmk" event={"ID":"bdd3fc23-984e-421b-93ad-fc2302602e92","Type":"ContainerStarted","Data":"d6be73d9a79d889609b73b4b56e15e35e48d486362204a7c062b5162d7e9b5d6"}
Feb 27 11:02:42 crc kubenswrapper[4728]: I0227 11:02:42.418950 4728 generic.go:334] "Generic (PLEG): container finished" podID="bdd3fc23-984e-421b-93ad-fc2302602e92" containerID="d6be73d9a79d889609b73b4b56e15e35e48d486362204a7c062b5162d7e9b5d6" exitCode=0
Feb 27 11:02:42 crc kubenswrapper[4728]: I0227 11:02:42.419001 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tcmk" event={"ID":"bdd3fc23-984e-421b-93ad-fc2302602e92","Type":"ContainerDied","Data":"d6be73d9a79d889609b73b4b56e15e35e48d486362204a7c062b5162d7e9b5d6"}
Feb 27 11:02:43 crc kubenswrapper[4728]: I0227 11:02:43.430369 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tcmk" event={"ID":"bdd3fc23-984e-421b-93ad-fc2302602e92","Type":"ContainerStarted","Data":"b161841d7c077adeff92143132f43a69b87c41c66fbe44c87330ff0122c851c6"}
Feb 27 11:02:43 crc kubenswrapper[4728]: I0227 11:02:43.454776 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2tcmk" podStartSLOduration=2.000139212 podStartE2EDuration="5.454755724s" podCreationTimestamp="2026-02-27 11:02:38 +0000 UTC" firstStartedPulling="2026-02-27 11:02:39.381905526 +0000 UTC m=+2179.344271632" lastFinishedPulling="2026-02-27 11:02:42.836522038 +0000 UTC m=+2182.798888144" observedRunningTime="2026-02-27 11:02:43.445256747 +0000 UTC m=+2183.407622853" watchObservedRunningTime="2026-02-27 11:02:43.454755724 +0000 UTC m=+2183.417121830"
Feb 27 11:02:44 crc kubenswrapper[4728]: I0227 11:02:44.891237 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fbbrm"]
Feb 27 11:02:44 crc kubenswrapper[4728]: I0227 11:02:44.910756 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fbbrm"
Feb 27 11:02:44 crc kubenswrapper[4728]: I0227 11:02:44.975772 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fbbrm"]
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.037617 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s2th\" (UniqueName: \"kubernetes.io/projected/eb3b246e-199a-47ea-9bc2-1b16b91a1522-kube-api-access-9s2th\") pod \"certified-operators-fbbrm\" (UID: \"eb3b246e-199a-47ea-9bc2-1b16b91a1522\") " pod="openshift-marketplace/certified-operators-fbbrm"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.037816 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb3b246e-199a-47ea-9bc2-1b16b91a1522-catalog-content\") pod \"certified-operators-fbbrm\" (UID: \"eb3b246e-199a-47ea-9bc2-1b16b91a1522\") " pod="openshift-marketplace/certified-operators-fbbrm"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.037957 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb3b246e-199a-47ea-9bc2-1b16b91a1522-utilities\") pod \"certified-operators-fbbrm\" (UID: \"eb3b246e-199a-47ea-9bc2-1b16b91a1522\") " pod="openshift-marketplace/certified-operators-fbbrm"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.085215 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sz58s"]
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.089082 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sz58s"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.106284 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sz58s"]
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.140328 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s2th\" (UniqueName: \"kubernetes.io/projected/eb3b246e-199a-47ea-9bc2-1b16b91a1522-kube-api-access-9s2th\") pod \"certified-operators-fbbrm\" (UID: \"eb3b246e-199a-47ea-9bc2-1b16b91a1522\") " pod="openshift-marketplace/certified-operators-fbbrm"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.140473 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb3b246e-199a-47ea-9bc2-1b16b91a1522-catalog-content\") pod \"certified-operators-fbbrm\" (UID: \"eb3b246e-199a-47ea-9bc2-1b16b91a1522\") " pod="openshift-marketplace/certified-operators-fbbrm"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.140614 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb3b246e-199a-47ea-9bc2-1b16b91a1522-utilities\") pod \"certified-operators-fbbrm\" (UID: \"eb3b246e-199a-47ea-9bc2-1b16b91a1522\") " pod="openshift-marketplace/certified-operators-fbbrm"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.141038 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb3b246e-199a-47ea-9bc2-1b16b91a1522-utilities\") pod \"certified-operators-fbbrm\" (UID: \"eb3b246e-199a-47ea-9bc2-1b16b91a1522\") " pod="openshift-marketplace/certified-operators-fbbrm"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.141260 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb3b246e-199a-47ea-9bc2-1b16b91a1522-catalog-content\") pod \"certified-operators-fbbrm\" (UID: \"eb3b246e-199a-47ea-9bc2-1b16b91a1522\") " pod="openshift-marketplace/certified-operators-fbbrm"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.168467 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s2th\" (UniqueName: \"kubernetes.io/projected/eb3b246e-199a-47ea-9bc2-1b16b91a1522-kube-api-access-9s2th\") pod \"certified-operators-fbbrm\" (UID: \"eb3b246e-199a-47ea-9bc2-1b16b91a1522\") " pod="openshift-marketplace/certified-operators-fbbrm"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.242363 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/837a3add-46f4-45b9-9a99-587f7cf094a1-catalog-content\") pod \"redhat-marketplace-sz58s\" (UID: \"837a3add-46f4-45b9-9a99-587f7cf094a1\") " pod="openshift-marketplace/redhat-marketplace-sz58s"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.242445 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/837a3add-46f4-45b9-9a99-587f7cf094a1-utilities\") pod \"redhat-marketplace-sz58s\" (UID: \"837a3add-46f4-45b9-9a99-587f7cf094a1\") " pod="openshift-marketplace/redhat-marketplace-sz58s"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.242618 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m97qb\" (UniqueName: \"kubernetes.io/projected/837a3add-46f4-45b9-9a99-587f7cf094a1-kube-api-access-m97qb\") pod \"redhat-marketplace-sz58s\" (UID: \"837a3add-46f4-45b9-9a99-587f7cf094a1\") " pod="openshift-marketplace/redhat-marketplace-sz58s"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.246114 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fbbrm"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.344383 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m97qb\" (UniqueName: \"kubernetes.io/projected/837a3add-46f4-45b9-9a99-587f7cf094a1-kube-api-access-m97qb\") pod \"redhat-marketplace-sz58s\" (UID: \"837a3add-46f4-45b9-9a99-587f7cf094a1\") " pod="openshift-marketplace/redhat-marketplace-sz58s"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.345438 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/837a3add-46f4-45b9-9a99-587f7cf094a1-catalog-content\") pod \"redhat-marketplace-sz58s\" (UID: \"837a3add-46f4-45b9-9a99-587f7cf094a1\") " pod="openshift-marketplace/redhat-marketplace-sz58s"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.345520 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/837a3add-46f4-45b9-9a99-587f7cf094a1-utilities\") pod \"redhat-marketplace-sz58s\" (UID: \"837a3add-46f4-45b9-9a99-587f7cf094a1\") " pod="openshift-marketplace/redhat-marketplace-sz58s"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.346047 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/837a3add-46f4-45b9-9a99-587f7cf094a1-utilities\") pod \"redhat-marketplace-sz58s\" (UID: \"837a3add-46f4-45b9-9a99-587f7cf094a1\") " pod="openshift-marketplace/redhat-marketplace-sz58s"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.347355 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/837a3add-46f4-45b9-9a99-587f7cf094a1-catalog-content\") pod \"redhat-marketplace-sz58s\" (UID: \"837a3add-46f4-45b9-9a99-587f7cf094a1\") " pod="openshift-marketplace/redhat-marketplace-sz58s"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.367223 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m97qb\" (UniqueName: \"kubernetes.io/projected/837a3add-46f4-45b9-9a99-587f7cf094a1-kube-api-access-m97qb\") pod \"redhat-marketplace-sz58s\" (UID: \"837a3add-46f4-45b9-9a99-587f7cf094a1\") " pod="openshift-marketplace/redhat-marketplace-sz58s"
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.408346 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sz58s"
Feb 27 11:02:45 crc kubenswrapper[4728]: W0227 11:02:45.749737 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb3b246e_199a_47ea_9bc2_1b16b91a1522.slice/crio-b047f9a03da009727008d8c149e9c73eda8087b7a4e92fb22dc27680988789d8 WatchSource:0}: Error finding container b047f9a03da009727008d8c149e9c73eda8087b7a4e92fb22dc27680988789d8: Status 404 returned error can't find the container with id b047f9a03da009727008d8c149e9c73eda8087b7a4e92fb22dc27680988789d8
Feb 27 11:02:45 crc kubenswrapper[4728]: I0227 11:02:45.754953 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fbbrm"]
Feb 27 11:02:46 crc kubenswrapper[4728]: I0227 11:02:46.257542 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sz58s"]
Feb 27 11:02:46 crc kubenswrapper[4728]: W0227 11:02:46.266692 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod837a3add_46f4_45b9_9a99_587f7cf094a1.slice/crio-c067e74fe3f4289fd6b0eebc22b49eb69cb1dd7c9cc58e1c6b9804f081371caf WatchSource:0}: Error finding container c067e74fe3f4289fd6b0eebc22b49eb69cb1dd7c9cc58e1c6b9804f081371caf: Status 404 returned error can't find the container with id c067e74fe3f4289fd6b0eebc22b49eb69cb1dd7c9cc58e1c6b9804f081371caf
Feb 27 11:02:46 crc kubenswrapper[4728]: I0227 11:02:46.521534 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sz58s" event={"ID":"837a3add-46f4-45b9-9a99-587f7cf094a1","Type":"ContainerStarted","Data":"c067e74fe3f4289fd6b0eebc22b49eb69cb1dd7c9cc58e1c6b9804f081371caf"}
Feb 27 11:02:46 crc kubenswrapper[4728]: I0227 11:02:46.522981 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbbrm" event={"ID":"eb3b246e-199a-47ea-9bc2-1b16b91a1522","Type":"ContainerStarted","Data":"b047f9a03da009727008d8c149e9c73eda8087b7a4e92fb22dc27680988789d8"}
Feb 27 11:02:47 crc kubenswrapper[4728]: I0227 11:02:47.537921 4728 generic.go:334] "Generic (PLEG): container finished" podID="837a3add-46f4-45b9-9a99-587f7cf094a1" containerID="22a2c1ae31dcdc9f115e480758486c6150654ac6c4afd6192d2a8d80c7b7926e" exitCode=0
Feb 27 11:02:47 crc kubenswrapper[4728]: I0227 11:02:47.538065 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sz58s" event={"ID":"837a3add-46f4-45b9-9a99-587f7cf094a1","Type":"ContainerDied","Data":"22a2c1ae31dcdc9f115e480758486c6150654ac6c4afd6192d2a8d80c7b7926e"}
Feb 27 11:02:47 crc kubenswrapper[4728]: I0227 11:02:47.545295 4728 generic.go:334] "Generic (PLEG): container finished" podID="72eee809-e748-4af7-a5b9-3f59015b2d8d" containerID="bafc8f8f6d7571eb9cdadd8d5a3f46644e108096eb077d37d99be85ad2b40cc1" exitCode=0
Feb 27 11:02:47 crc kubenswrapper[4728]: I0227 11:02:47.545371 4728 kubelet.go:2453] "SyncLoop (PLEG): event
for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n" event={"ID":"72eee809-e748-4af7-a5b9-3f59015b2d8d","Type":"ContainerDied","Data":"bafc8f8f6d7571eb9cdadd8d5a3f46644e108096eb077d37d99be85ad2b40cc1"} Feb 27 11:02:47 crc kubenswrapper[4728]: I0227 11:02:47.547898 4728 generic.go:334] "Generic (PLEG): container finished" podID="eb3b246e-199a-47ea-9bc2-1b16b91a1522" containerID="2212df44dfda5b09e52f4757eda6ea6d188f3ed9b7b5af1da364b7084fead46e" exitCode=0 Feb 27 11:02:47 crc kubenswrapper[4728]: I0227 11:02:47.547945 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbbrm" event={"ID":"eb3b246e-199a-47ea-9bc2-1b16b91a1522","Type":"ContainerDied","Data":"2212df44dfda5b09e52f4757eda6ea6d188f3ed9b7b5af1da364b7084fead46e"} Feb 27 11:02:48 crc kubenswrapper[4728]: I0227 11:02:48.474891 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2tcmk" Feb 27 11:02:48 crc kubenswrapper[4728]: I0227 11:02:48.475351 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2tcmk" Feb 27 11:02:48 crc kubenswrapper[4728]: I0227 11:02:48.536031 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2tcmk" Feb 27 11:02:48 crc kubenswrapper[4728]: I0227 11:02:48.562254 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbbrm" event={"ID":"eb3b246e-199a-47ea-9bc2-1b16b91a1522","Type":"ContainerStarted","Data":"7e335c628b4308728d1add96a9622c96eba99853c05754f2c1d75aaca608b838"} Feb 27 11:02:48 crc kubenswrapper[4728]: I0227 11:02:48.566639 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sz58s" 
event={"ID":"837a3add-46f4-45b9-9a99-587f7cf094a1","Type":"ContainerStarted","Data":"edfd1dd92a27aabaee2eee79b3c42f34091c9d884f6af25a918a028d19d650de"} Feb 27 11:02:48 crc kubenswrapper[4728]: I0227 11:02:48.643755 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2tcmk" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.136277 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.248559 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbk9s\" (UniqueName: \"kubernetes.io/projected/72eee809-e748-4af7-a5b9-3f59015b2d8d-kube-api-access-sbk9s\") pod \"72eee809-e748-4af7-a5b9-3f59015b2d8d\" (UID: \"72eee809-e748-4af7-a5b9-3f59015b2d8d\") " Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.249030 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72eee809-e748-4af7-a5b9-3f59015b2d8d-ssh-key-openstack-edpm-ipam\") pod \"72eee809-e748-4af7-a5b9-3f59015b2d8d\" (UID: \"72eee809-e748-4af7-a5b9-3f59015b2d8d\") " Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.249257 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72eee809-e748-4af7-a5b9-3f59015b2d8d-inventory\") pod \"72eee809-e748-4af7-a5b9-3f59015b2d8d\" (UID: \"72eee809-e748-4af7-a5b9-3f59015b2d8d\") " Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.255204 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72eee809-e748-4af7-a5b9-3f59015b2d8d-kube-api-access-sbk9s" (OuterVolumeSpecName: "kube-api-access-sbk9s") pod "72eee809-e748-4af7-a5b9-3f59015b2d8d" (UID: 
"72eee809-e748-4af7-a5b9-3f59015b2d8d"). InnerVolumeSpecName "kube-api-access-sbk9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.286252 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72eee809-e748-4af7-a5b9-3f59015b2d8d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "72eee809-e748-4af7-a5b9-3f59015b2d8d" (UID: "72eee809-e748-4af7-a5b9-3f59015b2d8d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.288190 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72eee809-e748-4af7-a5b9-3f59015b2d8d-inventory" (OuterVolumeSpecName: "inventory") pod "72eee809-e748-4af7-a5b9-3f59015b2d8d" (UID: "72eee809-e748-4af7-a5b9-3f59015b2d8d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.353391 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72eee809-e748-4af7-a5b9-3f59015b2d8d-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.353759 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbk9s\" (UniqueName: \"kubernetes.io/projected/72eee809-e748-4af7-a5b9-3f59015b2d8d-kube-api-access-sbk9s\") on node \"crc\" DevicePath \"\"" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.353909 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72eee809-e748-4af7-a5b9-3f59015b2d8d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.576235 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n" event={"ID":"72eee809-e748-4af7-a5b9-3f59015b2d8d","Type":"ContainerDied","Data":"9a1d12b1a10d30a810cd2ba982f8cd72e8c6e58f0724edae953d9c20808cd187"} Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.576284 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a1d12b1a10d30a810cd2ba982f8cd72e8c6e58f0724edae953d9c20808cd187" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.576705 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6v98n" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.744934 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd"] Feb 27 11:02:49 crc kubenswrapper[4728]: E0227 11:02:49.745879 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72eee809-e748-4af7-a5b9-3f59015b2d8d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.745899 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="72eee809-e748-4af7-a5b9-3f59015b2d8d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.746138 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="72eee809-e748-4af7-a5b9-3f59015b2d8d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.747002 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.749346 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.750101 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.753949 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.753969 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.761869 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd"] Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.865104 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkxkw\" (UniqueName: \"kubernetes.io/projected/da0fc581-2d10-45bd-aecf-8af4e8964c24-kube-api-access-hkxkw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd\" (UID: \"da0fc581-2d10-45bd-aecf-8af4e8964c24\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.865767 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da0fc581-2d10-45bd-aecf-8af4e8964c24-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd\" (UID: \"da0fc581-2d10-45bd-aecf-8af4e8964c24\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 
11:02:49.865805 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da0fc581-2d10-45bd-aecf-8af4e8964c24-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd\" (UID: \"da0fc581-2d10-45bd-aecf-8af4e8964c24\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.968936 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da0fc581-2d10-45bd-aecf-8af4e8964c24-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd\" (UID: \"da0fc581-2d10-45bd-aecf-8af4e8964c24\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.968986 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da0fc581-2d10-45bd-aecf-8af4e8964c24-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd\" (UID: \"da0fc581-2d10-45bd-aecf-8af4e8964c24\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.969101 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkxkw\" (UniqueName: \"kubernetes.io/projected/da0fc581-2d10-45bd-aecf-8af4e8964c24-kube-api-access-hkxkw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd\" (UID: \"da0fc581-2d10-45bd-aecf-8af4e8964c24\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.974163 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da0fc581-2d10-45bd-aecf-8af4e8964c24-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd\" (UID: \"da0fc581-2d10-45bd-aecf-8af4e8964c24\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.975019 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da0fc581-2d10-45bd-aecf-8af4e8964c24-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd\" (UID: \"da0fc581-2d10-45bd-aecf-8af4e8964c24\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd" Feb 27 11:02:49 crc kubenswrapper[4728]: I0227 11:02:49.998088 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkxkw\" (UniqueName: \"kubernetes.io/projected/da0fc581-2d10-45bd-aecf-8af4e8964c24-kube-api-access-hkxkw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd\" (UID: \"da0fc581-2d10-45bd-aecf-8af4e8964c24\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd" Feb 27 11:02:50 crc kubenswrapper[4728]: I0227 11:02:50.064750 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd" Feb 27 11:02:50 crc kubenswrapper[4728]: I0227 11:02:50.632966 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd"] Feb 27 11:02:51 crc kubenswrapper[4728]: I0227 11:02:51.283790 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2tcmk"] Feb 27 11:02:51 crc kubenswrapper[4728]: I0227 11:02:51.284635 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2tcmk" podUID="bdd3fc23-984e-421b-93ad-fc2302602e92" containerName="registry-server" containerID="cri-o://b161841d7c077adeff92143132f43a69b87c41c66fbe44c87330ff0122c851c6" gracePeriod=2 Feb 27 11:02:51 crc kubenswrapper[4728]: I0227 11:02:51.607240 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd" event={"ID":"da0fc581-2d10-45bd-aecf-8af4e8964c24","Type":"ContainerStarted","Data":"20fce2b4517acf3f57f7aac7235517e9d47cd41529d0b3aac15b27283d29393b"} Feb 27 11:02:51 crc kubenswrapper[4728]: I0227 11:02:51.607713 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd" event={"ID":"da0fc581-2d10-45bd-aecf-8af4e8964c24","Type":"ContainerStarted","Data":"ad8e337186b82f40f2ac504000062772ca7aca86fa55b28614f9195f37e60198"} Feb 27 11:02:51 crc kubenswrapper[4728]: I0227 11:02:51.611851 4728 generic.go:334] "Generic (PLEG): container finished" podID="bdd3fc23-984e-421b-93ad-fc2302602e92" containerID="b161841d7c077adeff92143132f43a69b87c41c66fbe44c87330ff0122c851c6" exitCode=0 Feb 27 11:02:51 crc kubenswrapper[4728]: I0227 11:02:51.611922 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tcmk" 
event={"ID":"bdd3fc23-984e-421b-93ad-fc2302602e92","Type":"ContainerDied","Data":"b161841d7c077adeff92143132f43a69b87c41c66fbe44c87330ff0122c851c6"} Feb 27 11:02:51 crc kubenswrapper[4728]: I0227 11:02:51.614307 4728 generic.go:334] "Generic (PLEG): container finished" podID="837a3add-46f4-45b9-9a99-587f7cf094a1" containerID="edfd1dd92a27aabaee2eee79b3c42f34091c9d884f6af25a918a028d19d650de" exitCode=0 Feb 27 11:02:51 crc kubenswrapper[4728]: I0227 11:02:51.614350 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sz58s" event={"ID":"837a3add-46f4-45b9-9a99-587f7cf094a1","Type":"ContainerDied","Data":"edfd1dd92a27aabaee2eee79b3c42f34091c9d884f6af25a918a028d19d650de"} Feb 27 11:02:51 crc kubenswrapper[4728]: I0227 11:02:51.655719 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd" podStartSLOduration=2.223527511 podStartE2EDuration="2.655693357s" podCreationTimestamp="2026-02-27 11:02:49 +0000 UTC" firstStartedPulling="2026-02-27 11:02:50.632202996 +0000 UTC m=+2190.594569132" lastFinishedPulling="2026-02-27 11:02:51.064368872 +0000 UTC m=+2191.026734978" observedRunningTime="2026-02-27 11:02:51.63147108 +0000 UTC m=+2191.593837196" watchObservedRunningTime="2026-02-27 11:02:51.655693357 +0000 UTC m=+2191.618059463" Feb 27 11:02:51 crc kubenswrapper[4728]: I0227 11:02:51.825592 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2tcmk" Feb 27 11:02:51 crc kubenswrapper[4728]: I0227 11:02:51.922438 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdd3fc23-984e-421b-93ad-fc2302602e92-utilities\") pod \"bdd3fc23-984e-421b-93ad-fc2302602e92\" (UID: \"bdd3fc23-984e-421b-93ad-fc2302602e92\") " Feb 27 11:02:51 crc kubenswrapper[4728]: I0227 11:02:51.922623 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdd3fc23-984e-421b-93ad-fc2302602e92-catalog-content\") pod \"bdd3fc23-984e-421b-93ad-fc2302602e92\" (UID: \"bdd3fc23-984e-421b-93ad-fc2302602e92\") " Feb 27 11:02:51 crc kubenswrapper[4728]: I0227 11:02:51.922665 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm298\" (UniqueName: \"kubernetes.io/projected/bdd3fc23-984e-421b-93ad-fc2302602e92-kube-api-access-wm298\") pod \"bdd3fc23-984e-421b-93ad-fc2302602e92\" (UID: \"bdd3fc23-984e-421b-93ad-fc2302602e92\") " Feb 27 11:02:51 crc kubenswrapper[4728]: I0227 11:02:51.923194 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdd3fc23-984e-421b-93ad-fc2302602e92-utilities" (OuterVolumeSpecName: "utilities") pod "bdd3fc23-984e-421b-93ad-fc2302602e92" (UID: "bdd3fc23-984e-421b-93ad-fc2302602e92"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:02:51 crc kubenswrapper[4728]: I0227 11:02:51.927860 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdd3fc23-984e-421b-93ad-fc2302602e92-kube-api-access-wm298" (OuterVolumeSpecName: "kube-api-access-wm298") pod "bdd3fc23-984e-421b-93ad-fc2302602e92" (UID: "bdd3fc23-984e-421b-93ad-fc2302602e92"). InnerVolumeSpecName "kube-api-access-wm298". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:02:51 crc kubenswrapper[4728]: I0227 11:02:51.982493 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdd3fc23-984e-421b-93ad-fc2302602e92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdd3fc23-984e-421b-93ad-fc2302602e92" (UID: "bdd3fc23-984e-421b-93ad-fc2302602e92"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:02:52 crc kubenswrapper[4728]: I0227 11:02:52.025828 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdd3fc23-984e-421b-93ad-fc2302602e92-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:02:52 crc kubenswrapper[4728]: I0227 11:02:52.025967 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdd3fc23-984e-421b-93ad-fc2302602e92-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:02:52 crc kubenswrapper[4728]: I0227 11:02:52.026056 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm298\" (UniqueName: \"kubernetes.io/projected/bdd3fc23-984e-421b-93ad-fc2302602e92-kube-api-access-wm298\") on node \"crc\" DevicePath \"\"" Feb 27 11:02:52 crc kubenswrapper[4728]: I0227 11:02:52.631115 4728 generic.go:334] "Generic (PLEG): container finished" podID="eb3b246e-199a-47ea-9bc2-1b16b91a1522" containerID="7e335c628b4308728d1add96a9622c96eba99853c05754f2c1d75aaca608b838" exitCode=0 Feb 27 11:02:52 crc kubenswrapper[4728]: I0227 11:02:52.631221 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbbrm" event={"ID":"eb3b246e-199a-47ea-9bc2-1b16b91a1522","Type":"ContainerDied","Data":"7e335c628b4308728d1add96a9622c96eba99853c05754f2c1d75aaca608b838"} Feb 27 11:02:52 crc kubenswrapper[4728]: I0227 11:02:52.638877 4728 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-2tcmk" Feb 27 11:02:52 crc kubenswrapper[4728]: I0227 11:02:52.639209 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tcmk" event={"ID":"bdd3fc23-984e-421b-93ad-fc2302602e92","Type":"ContainerDied","Data":"0446fdb562a51f71648e60c4c17612637ccc9e89a64a04354783357154efa5d8"} Feb 27 11:02:52 crc kubenswrapper[4728]: I0227 11:02:52.639285 4728 scope.go:117] "RemoveContainer" containerID="b161841d7c077adeff92143132f43a69b87c41c66fbe44c87330ff0122c851c6" Feb 27 11:02:52 crc kubenswrapper[4728]: I0227 11:02:52.723715 4728 scope.go:117] "RemoveContainer" containerID="d6be73d9a79d889609b73b4b56e15e35e48d486362204a7c062b5162d7e9b5d6" Feb 27 11:02:52 crc kubenswrapper[4728]: I0227 11:02:52.755748 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2tcmk"] Feb 27 11:02:52 crc kubenswrapper[4728]: I0227 11:02:52.766487 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2tcmk"] Feb 27 11:02:52 crc kubenswrapper[4728]: I0227 11:02:52.768335 4728 scope.go:117] "RemoveContainer" containerID="62e3b0cdd6fc01eeaf43d756d289dbe02bcf953a438c9370d0a0943087f11e24" Feb 27 11:02:53 crc kubenswrapper[4728]: I0227 11:02:53.655746 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sz58s" event={"ID":"837a3add-46f4-45b9-9a99-587f7cf094a1","Type":"ContainerStarted","Data":"be2c0db84efdaec4e7b7aa970c2f433527ffb797b909a6e00e74fe4d75c5ff93"} Feb 27 11:02:53 crc kubenswrapper[4728]: I0227 11:02:53.658682 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbbrm" event={"ID":"eb3b246e-199a-47ea-9bc2-1b16b91a1522","Type":"ContainerStarted","Data":"1740198137d35a8e4997dc147a67bebb68f3c5e89832dc0b602f73cb1a9b3e51"} Feb 27 11:02:53 crc kubenswrapper[4728]: I0227 
11:02:53.694774 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sz58s" podStartSLOduration=3.644894939 podStartE2EDuration="8.694750008s" podCreationTimestamp="2026-02-27 11:02:45 +0000 UTC" firstStartedPulling="2026-02-27 11:02:47.540113731 +0000 UTC m=+2187.502479847" lastFinishedPulling="2026-02-27 11:02:52.58996877 +0000 UTC m=+2192.552334916" observedRunningTime="2026-02-27 11:02:53.674401245 +0000 UTC m=+2193.636767361" watchObservedRunningTime="2026-02-27 11:02:53.694750008 +0000 UTC m=+2193.657116124" Feb 27 11:02:53 crc kubenswrapper[4728]: I0227 11:02:53.711586 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fbbrm" podStartSLOduration=4.080792442 podStartE2EDuration="9.711563294s" podCreationTimestamp="2026-02-27 11:02:44 +0000 UTC" firstStartedPulling="2026-02-27 11:02:47.549818194 +0000 UTC m=+2187.512184300" lastFinishedPulling="2026-02-27 11:02:53.180589046 +0000 UTC m=+2193.142955152" observedRunningTime="2026-02-27 11:02:53.710209717 +0000 UTC m=+2193.672575823" watchObservedRunningTime="2026-02-27 11:02:53.711563294 +0000 UTC m=+2193.673929410" Feb 27 11:02:54 crc kubenswrapper[4728]: I0227 11:02:54.748492 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdd3fc23-984e-421b-93ad-fc2302602e92" path="/var/lib/kubelet/pods/bdd3fc23-984e-421b-93ad-fc2302602e92/volumes" Feb 27 11:02:55 crc kubenswrapper[4728]: I0227 11:02:55.246983 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fbbrm" Feb 27 11:02:55 crc kubenswrapper[4728]: I0227 11:02:55.247332 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fbbrm" Feb 27 11:02:55 crc kubenswrapper[4728]: I0227 11:02:55.409796 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-sz58s" Feb 27 11:02:55 crc kubenswrapper[4728]: I0227 11:02:55.409878 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sz58s" Feb 27 11:02:56 crc kubenswrapper[4728]: I0227 11:02:56.328562 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fbbrm" podUID="eb3b246e-199a-47ea-9bc2-1b16b91a1522" containerName="registry-server" probeResult="failure" output=< Feb 27 11:02:56 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:02:56 crc kubenswrapper[4728]: > Feb 27 11:02:56 crc kubenswrapper[4728]: I0227 11:02:56.464533 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-sz58s" podUID="837a3add-46f4-45b9-9a99-587f7cf094a1" containerName="registry-server" probeResult="failure" output=< Feb 27 11:02:56 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:02:56 crc kubenswrapper[4728]: > Feb 27 11:03:03 crc kubenswrapper[4728]: I0227 11:03:03.791201 4728 generic.go:334] "Generic (PLEG): container finished" podID="da0fc581-2d10-45bd-aecf-8af4e8964c24" containerID="20fce2b4517acf3f57f7aac7235517e9d47cd41529d0b3aac15b27283d29393b" exitCode=0 Feb 27 11:03:03 crc kubenswrapper[4728]: I0227 11:03:03.791708 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd" event={"ID":"da0fc581-2d10-45bd-aecf-8af4e8964c24","Type":"ContainerDied","Data":"20fce2b4517acf3f57f7aac7235517e9d47cd41529d0b3aac15b27283d29393b"} Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.310129 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fbbrm" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.366884 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-fbbrm" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.393521 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.463407 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sz58s" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.500711 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da0fc581-2d10-45bd-aecf-8af4e8964c24-ssh-key-openstack-edpm-ipam\") pod \"da0fc581-2d10-45bd-aecf-8af4e8964c24\" (UID: \"da0fc581-2d10-45bd-aecf-8af4e8964c24\") " Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.500795 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da0fc581-2d10-45bd-aecf-8af4e8964c24-inventory\") pod \"da0fc581-2d10-45bd-aecf-8af4e8964c24\" (UID: \"da0fc581-2d10-45bd-aecf-8af4e8964c24\") " Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.501398 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkxkw\" (UniqueName: \"kubernetes.io/projected/da0fc581-2d10-45bd-aecf-8af4e8964c24-kube-api-access-hkxkw\") pod \"da0fc581-2d10-45bd-aecf-8af4e8964c24\" (UID: \"da0fc581-2d10-45bd-aecf-8af4e8964c24\") " Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.514718 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da0fc581-2d10-45bd-aecf-8af4e8964c24-kube-api-access-hkxkw" (OuterVolumeSpecName: "kube-api-access-hkxkw") pod "da0fc581-2d10-45bd-aecf-8af4e8964c24" (UID: "da0fc581-2d10-45bd-aecf-8af4e8964c24"). InnerVolumeSpecName "kube-api-access-hkxkw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.519015 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sz58s" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.534752 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0fc581-2d10-45bd-aecf-8af4e8964c24-inventory" (OuterVolumeSpecName: "inventory") pod "da0fc581-2d10-45bd-aecf-8af4e8964c24" (UID: "da0fc581-2d10-45bd-aecf-8af4e8964c24"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.539792 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0fc581-2d10-45bd-aecf-8af4e8964c24-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "da0fc581-2d10-45bd-aecf-8af4e8964c24" (UID: "da0fc581-2d10-45bd-aecf-8af4e8964c24"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.566484 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fbbrm"] Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.605254 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkxkw\" (UniqueName: \"kubernetes.io/projected/da0fc581-2d10-45bd-aecf-8af4e8964c24-kube-api-access-hkxkw\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.605305 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da0fc581-2d10-45bd-aecf-8af4e8964c24-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.605331 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da0fc581-2d10-45bd-aecf-8af4e8964c24-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.816141 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd" event={"ID":"da0fc581-2d10-45bd-aecf-8af4e8964c24","Type":"ContainerDied","Data":"ad8e337186b82f40f2ac504000062772ca7aca86fa55b28614f9195f37e60198"} Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.816555 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad8e337186b82f40f2ac504000062772ca7aca86fa55b28614f9195f37e60198" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.816241 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.945661 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98"] Feb 27 11:03:05 crc kubenswrapper[4728]: E0227 11:03:05.946242 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd3fc23-984e-421b-93ad-fc2302602e92" containerName="extract-utilities" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.946269 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd3fc23-984e-421b-93ad-fc2302602e92" containerName="extract-utilities" Feb 27 11:03:05 crc kubenswrapper[4728]: E0227 11:03:05.946312 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd3fc23-984e-421b-93ad-fc2302602e92" containerName="extract-content" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.946322 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd3fc23-984e-421b-93ad-fc2302602e92" containerName="extract-content" Feb 27 11:03:05 crc kubenswrapper[4728]: E0227 11:03:05.946335 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da0fc581-2d10-45bd-aecf-8af4e8964c24" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.946344 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="da0fc581-2d10-45bd-aecf-8af4e8964c24" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 27 11:03:05 crc kubenswrapper[4728]: E0227 11:03:05.946358 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd3fc23-984e-421b-93ad-fc2302602e92" containerName="registry-server" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.946365 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd3fc23-984e-421b-93ad-fc2302602e92" containerName="registry-server" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.946671 
4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="da0fc581-2d10-45bd-aecf-8af4e8964c24" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.946708 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdd3fc23-984e-421b-93ad-fc2302602e92" containerName="registry-server" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.947603 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.956058 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.956201 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.956321 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.956393 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.957140 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.957294 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.957468 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.959432 4728 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.959679 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 27 11:03:05 crc kubenswrapper[4728]: I0227 11:03:05.971951 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98"] Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.074115 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.074710 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.074732 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.074749 
4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.074777 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.074797 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.074816 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.074977 4728 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.075021 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.075064 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzntv\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-kube-api-access-nzntv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.075135 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" 
Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.075197 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.075236 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.075394 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.075454 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" 
Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.075728 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.177917 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.177981 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.178024 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.178044 4728 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.178146 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.178833 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.178888 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.178911 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.178931 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.178963 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.179017 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.179045 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-libvirt-default-certs-0\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.179131 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.179216 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.179261 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzntv\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-kube-api-access-nzntv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.179292 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.183025 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.183218 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.183406 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.183808 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.184922 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.185649 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.185676 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.187797 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" 
(UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.188021 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.188878 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.189547 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.189883 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.190267 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.190688 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.195317 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.198096 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzntv\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-kube-api-access-nzntv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f7k98\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.270211 4728 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.833995 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fbbrm" podUID="eb3b246e-199a-47ea-9bc2-1b16b91a1522" containerName="registry-server" containerID="cri-o://1740198137d35a8e4997dc147a67bebb68f3c5e89832dc0b602f73cb1a9b3e51" gracePeriod=2 Feb 27 11:03:06 crc kubenswrapper[4728]: I0227 11:03:06.926271 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98"] Feb 27 11:03:06 crc kubenswrapper[4728]: W0227 11:03:06.954689 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73079959_3e26_47dc_8d3a_e7051acb0574.slice/crio-8e4caa0d9121b0b8d5badfecc30a76cd03ee6a4875f09a68ec002e7ee558d703 WatchSource:0}: Error finding container 8e4caa0d9121b0b8d5badfecc30a76cd03ee6a4875f09a68ec002e7ee558d703: Status 404 returned error can't find the container with id 8e4caa0d9121b0b8d5badfecc30a76cd03ee6a4875f09a68ec002e7ee558d703 Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.408652 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fbbrm" Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.420549 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s2th\" (UniqueName: \"kubernetes.io/projected/eb3b246e-199a-47ea-9bc2-1b16b91a1522-kube-api-access-9s2th\") pod \"eb3b246e-199a-47ea-9bc2-1b16b91a1522\" (UID: \"eb3b246e-199a-47ea-9bc2-1b16b91a1522\") " Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.420741 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb3b246e-199a-47ea-9bc2-1b16b91a1522-utilities\") pod \"eb3b246e-199a-47ea-9bc2-1b16b91a1522\" (UID: \"eb3b246e-199a-47ea-9bc2-1b16b91a1522\") " Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.420774 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb3b246e-199a-47ea-9bc2-1b16b91a1522-catalog-content\") pod \"eb3b246e-199a-47ea-9bc2-1b16b91a1522\" (UID: \"eb3b246e-199a-47ea-9bc2-1b16b91a1522\") " Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.421520 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb3b246e-199a-47ea-9bc2-1b16b91a1522-utilities" (OuterVolumeSpecName: "utilities") pod "eb3b246e-199a-47ea-9bc2-1b16b91a1522" (UID: "eb3b246e-199a-47ea-9bc2-1b16b91a1522"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.421831 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb3b246e-199a-47ea-9bc2-1b16b91a1522-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.426600 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb3b246e-199a-47ea-9bc2-1b16b91a1522-kube-api-access-9s2th" (OuterVolumeSpecName: "kube-api-access-9s2th") pod "eb3b246e-199a-47ea-9bc2-1b16b91a1522" (UID: "eb3b246e-199a-47ea-9bc2-1b16b91a1522"). InnerVolumeSpecName "kube-api-access-9s2th". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.518106 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb3b246e-199a-47ea-9bc2-1b16b91a1522-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb3b246e-199a-47ea-9bc2-1b16b91a1522" (UID: "eb3b246e-199a-47ea-9bc2-1b16b91a1522"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.524654 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s2th\" (UniqueName: \"kubernetes.io/projected/eb3b246e-199a-47ea-9bc2-1b16b91a1522-kube-api-access-9s2th\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.524704 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb3b246e-199a-47ea-9bc2-1b16b91a1522-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.761791 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sz58s"] Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.762005 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sz58s" podUID="837a3add-46f4-45b9-9a99-587f7cf094a1" containerName="registry-server" containerID="cri-o://be2c0db84efdaec4e7b7aa970c2f433527ffb797b909a6e00e74fe4d75c5ff93" gracePeriod=2 Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.847378 4728 generic.go:334] "Generic (PLEG): container finished" podID="eb3b246e-199a-47ea-9bc2-1b16b91a1522" containerID="1740198137d35a8e4997dc147a67bebb68f3c5e89832dc0b602f73cb1a9b3e51" exitCode=0 Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.847714 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbbrm" event={"ID":"eb3b246e-199a-47ea-9bc2-1b16b91a1522","Type":"ContainerDied","Data":"1740198137d35a8e4997dc147a67bebb68f3c5e89832dc0b602f73cb1a9b3e51"} Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.847741 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbbrm" 
event={"ID":"eb3b246e-199a-47ea-9bc2-1b16b91a1522","Type":"ContainerDied","Data":"b047f9a03da009727008d8c149e9c73eda8087b7a4e92fb22dc27680988789d8"} Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.847749 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fbbrm" Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.847760 4728 scope.go:117] "RemoveContainer" containerID="1740198137d35a8e4997dc147a67bebb68f3c5e89832dc0b602f73cb1a9b3e51" Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.859398 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" event={"ID":"73079959-3e26-47dc-8d3a-e7051acb0574","Type":"ContainerStarted","Data":"b6d44205bb47f7465ac2f54833061a4f7ec723217d54d6d6e477d34d72178acf"} Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.859449 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" event={"ID":"73079959-3e26-47dc-8d3a-e7051acb0574","Type":"ContainerStarted","Data":"8e4caa0d9121b0b8d5badfecc30a76cd03ee6a4875f09a68ec002e7ee558d703"} Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.884181 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" podStartSLOduration=2.378825927 podStartE2EDuration="2.884157059s" podCreationTimestamp="2026-02-27 11:03:05 +0000 UTC" firstStartedPulling="2026-02-27 11:03:06.956957736 +0000 UTC m=+2206.919323842" lastFinishedPulling="2026-02-27 11:03:07.462288868 +0000 UTC m=+2207.424654974" observedRunningTime="2026-02-27 11:03:07.879697397 +0000 UTC m=+2207.842063533" watchObservedRunningTime="2026-02-27 11:03:07.884157059 +0000 UTC m=+2207.846523185" Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.987230 4728 scope.go:117] "RemoveContainer" 
containerID="7e335c628b4308728d1add96a9622c96eba99853c05754f2c1d75aaca608b838" Feb 27 11:03:07 crc kubenswrapper[4728]: I0227 11:03:07.999495 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fbbrm"] Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.017581 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fbbrm"] Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.026140 4728 scope.go:117] "RemoveContainer" containerID="2212df44dfda5b09e52f4757eda6ea6d188f3ed9b7b5af1da364b7084fead46e" Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.057330 4728 scope.go:117] "RemoveContainer" containerID="1740198137d35a8e4997dc147a67bebb68f3c5e89832dc0b602f73cb1a9b3e51" Feb 27 11:03:08 crc kubenswrapper[4728]: E0227 11:03:08.057811 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1740198137d35a8e4997dc147a67bebb68f3c5e89832dc0b602f73cb1a9b3e51\": container with ID starting with 1740198137d35a8e4997dc147a67bebb68f3c5e89832dc0b602f73cb1a9b3e51 not found: ID does not exist" containerID="1740198137d35a8e4997dc147a67bebb68f3c5e89832dc0b602f73cb1a9b3e51" Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.057846 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1740198137d35a8e4997dc147a67bebb68f3c5e89832dc0b602f73cb1a9b3e51"} err="failed to get container status \"1740198137d35a8e4997dc147a67bebb68f3c5e89832dc0b602f73cb1a9b3e51\": rpc error: code = NotFound desc = could not find container \"1740198137d35a8e4997dc147a67bebb68f3c5e89832dc0b602f73cb1a9b3e51\": container with ID starting with 1740198137d35a8e4997dc147a67bebb68f3c5e89832dc0b602f73cb1a9b3e51 not found: ID does not exist" Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.057877 4728 scope.go:117] "RemoveContainer" 
containerID="7e335c628b4308728d1add96a9622c96eba99853c05754f2c1d75aaca608b838" Feb 27 11:03:08 crc kubenswrapper[4728]: E0227 11:03:08.058158 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e335c628b4308728d1add96a9622c96eba99853c05754f2c1d75aaca608b838\": container with ID starting with 7e335c628b4308728d1add96a9622c96eba99853c05754f2c1d75aaca608b838 not found: ID does not exist" containerID="7e335c628b4308728d1add96a9622c96eba99853c05754f2c1d75aaca608b838" Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.058321 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e335c628b4308728d1add96a9622c96eba99853c05754f2c1d75aaca608b838"} err="failed to get container status \"7e335c628b4308728d1add96a9622c96eba99853c05754f2c1d75aaca608b838\": rpc error: code = NotFound desc = could not find container \"7e335c628b4308728d1add96a9622c96eba99853c05754f2c1d75aaca608b838\": container with ID starting with 7e335c628b4308728d1add96a9622c96eba99853c05754f2c1d75aaca608b838 not found: ID does not exist" Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.058444 4728 scope.go:117] "RemoveContainer" containerID="2212df44dfda5b09e52f4757eda6ea6d188f3ed9b7b5af1da364b7084fead46e" Feb 27 11:03:08 crc kubenswrapper[4728]: E0227 11:03:08.058863 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2212df44dfda5b09e52f4757eda6ea6d188f3ed9b7b5af1da364b7084fead46e\": container with ID starting with 2212df44dfda5b09e52f4757eda6ea6d188f3ed9b7b5af1da364b7084fead46e not found: ID does not exist" containerID="2212df44dfda5b09e52f4757eda6ea6d188f3ed9b7b5af1da364b7084fead46e" Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.058886 4728 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2212df44dfda5b09e52f4757eda6ea6d188f3ed9b7b5af1da364b7084fead46e"} err="failed to get container status \"2212df44dfda5b09e52f4757eda6ea6d188f3ed9b7b5af1da364b7084fead46e\": rpc error: code = NotFound desc = could not find container \"2212df44dfda5b09e52f4757eda6ea6d188f3ed9b7b5af1da364b7084fead46e\": container with ID starting with 2212df44dfda5b09e52f4757eda6ea6d188f3ed9b7b5af1da364b7084fead46e not found: ID does not exist" Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.265797 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sz58s" Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.342555 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/837a3add-46f4-45b9-9a99-587f7cf094a1-catalog-content\") pod \"837a3add-46f4-45b9-9a99-587f7cf094a1\" (UID: \"837a3add-46f4-45b9-9a99-587f7cf094a1\") " Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.342729 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/837a3add-46f4-45b9-9a99-587f7cf094a1-utilities\") pod \"837a3add-46f4-45b9-9a99-587f7cf094a1\" (UID: \"837a3add-46f4-45b9-9a99-587f7cf094a1\") " Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.342764 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m97qb\" (UniqueName: \"kubernetes.io/projected/837a3add-46f4-45b9-9a99-587f7cf094a1-kube-api-access-m97qb\") pod \"837a3add-46f4-45b9-9a99-587f7cf094a1\" (UID: \"837a3add-46f4-45b9-9a99-587f7cf094a1\") " Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.345054 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/837a3add-46f4-45b9-9a99-587f7cf094a1-utilities" (OuterVolumeSpecName: "utilities") pod 
"837a3add-46f4-45b9-9a99-587f7cf094a1" (UID: "837a3add-46f4-45b9-9a99-587f7cf094a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.349868 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/837a3add-46f4-45b9-9a99-587f7cf094a1-kube-api-access-m97qb" (OuterVolumeSpecName: "kube-api-access-m97qb") pod "837a3add-46f4-45b9-9a99-587f7cf094a1" (UID: "837a3add-46f4-45b9-9a99-587f7cf094a1"). InnerVolumeSpecName "kube-api-access-m97qb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.379739 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/837a3add-46f4-45b9-9a99-587f7cf094a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "837a3add-46f4-45b9-9a99-587f7cf094a1" (UID: "837a3add-46f4-45b9-9a99-587f7cf094a1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.449525 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/837a3add-46f4-45b9-9a99-587f7cf094a1-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.449570 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m97qb\" (UniqueName: \"kubernetes.io/projected/837a3add-46f4-45b9-9a99-587f7cf094a1-kube-api-access-m97qb\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.449586 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/837a3add-46f4-45b9-9a99-587f7cf094a1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.743287 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb3b246e-199a-47ea-9bc2-1b16b91a1522" path="/var/lib/kubelet/pods/eb3b246e-199a-47ea-9bc2-1b16b91a1522/volumes" Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.884200 4728 generic.go:334] "Generic (PLEG): container finished" podID="837a3add-46f4-45b9-9a99-587f7cf094a1" containerID="be2c0db84efdaec4e7b7aa970c2f433527ffb797b909a6e00e74fe4d75c5ff93" exitCode=0 Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.884282 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sz58s" Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.884288 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sz58s" event={"ID":"837a3add-46f4-45b9-9a99-587f7cf094a1","Type":"ContainerDied","Data":"be2c0db84efdaec4e7b7aa970c2f433527ffb797b909a6e00e74fe4d75c5ff93"} Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.885391 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sz58s" event={"ID":"837a3add-46f4-45b9-9a99-587f7cf094a1","Type":"ContainerDied","Data":"c067e74fe3f4289fd6b0eebc22b49eb69cb1dd7c9cc58e1c6b9804f081371caf"} Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.885447 4728 scope.go:117] "RemoveContainer" containerID="be2c0db84efdaec4e7b7aa970c2f433527ffb797b909a6e00e74fe4d75c5ff93" Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.918969 4728 scope.go:117] "RemoveContainer" containerID="edfd1dd92a27aabaee2eee79b3c42f34091c9d884f6af25a918a028d19d650de" Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.925826 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sz58s"] Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.941476 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sz58s"] Feb 27 11:03:08 crc kubenswrapper[4728]: I0227 11:03:08.947912 4728 scope.go:117] "RemoveContainer" containerID="22a2c1ae31dcdc9f115e480758486c6150654ac6c4afd6192d2a8d80c7b7926e" Feb 27 11:03:09 crc kubenswrapper[4728]: I0227 11:03:09.004959 4728 scope.go:117] "RemoveContainer" containerID="be2c0db84efdaec4e7b7aa970c2f433527ffb797b909a6e00e74fe4d75c5ff93" Feb 27 11:03:09 crc kubenswrapper[4728]: E0227 11:03:09.005937 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"be2c0db84efdaec4e7b7aa970c2f433527ffb797b909a6e00e74fe4d75c5ff93\": container with ID starting with be2c0db84efdaec4e7b7aa970c2f433527ffb797b909a6e00e74fe4d75c5ff93 not found: ID does not exist" containerID="be2c0db84efdaec4e7b7aa970c2f433527ffb797b909a6e00e74fe4d75c5ff93" Feb 27 11:03:09 crc kubenswrapper[4728]: I0227 11:03:09.005971 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be2c0db84efdaec4e7b7aa970c2f433527ffb797b909a6e00e74fe4d75c5ff93"} err="failed to get container status \"be2c0db84efdaec4e7b7aa970c2f433527ffb797b909a6e00e74fe4d75c5ff93\": rpc error: code = NotFound desc = could not find container \"be2c0db84efdaec4e7b7aa970c2f433527ffb797b909a6e00e74fe4d75c5ff93\": container with ID starting with be2c0db84efdaec4e7b7aa970c2f433527ffb797b909a6e00e74fe4d75c5ff93 not found: ID does not exist" Feb 27 11:03:09 crc kubenswrapper[4728]: I0227 11:03:09.005995 4728 scope.go:117] "RemoveContainer" containerID="edfd1dd92a27aabaee2eee79b3c42f34091c9d884f6af25a918a028d19d650de" Feb 27 11:03:09 crc kubenswrapper[4728]: E0227 11:03:09.006272 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edfd1dd92a27aabaee2eee79b3c42f34091c9d884f6af25a918a028d19d650de\": container with ID starting with edfd1dd92a27aabaee2eee79b3c42f34091c9d884f6af25a918a028d19d650de not found: ID does not exist" containerID="edfd1dd92a27aabaee2eee79b3c42f34091c9d884f6af25a918a028d19d650de" Feb 27 11:03:09 crc kubenswrapper[4728]: I0227 11:03:09.006292 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edfd1dd92a27aabaee2eee79b3c42f34091c9d884f6af25a918a028d19d650de"} err="failed to get container status \"edfd1dd92a27aabaee2eee79b3c42f34091c9d884f6af25a918a028d19d650de\": rpc error: code = NotFound desc = could not find container \"edfd1dd92a27aabaee2eee79b3c42f34091c9d884f6af25a918a028d19d650de\": container with ID 
starting with edfd1dd92a27aabaee2eee79b3c42f34091c9d884f6af25a918a028d19d650de not found: ID does not exist" Feb 27 11:03:09 crc kubenswrapper[4728]: I0227 11:03:09.006304 4728 scope.go:117] "RemoveContainer" containerID="22a2c1ae31dcdc9f115e480758486c6150654ac6c4afd6192d2a8d80c7b7926e" Feb 27 11:03:09 crc kubenswrapper[4728]: E0227 11:03:09.006600 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22a2c1ae31dcdc9f115e480758486c6150654ac6c4afd6192d2a8d80c7b7926e\": container with ID starting with 22a2c1ae31dcdc9f115e480758486c6150654ac6c4afd6192d2a8d80c7b7926e not found: ID does not exist" containerID="22a2c1ae31dcdc9f115e480758486c6150654ac6c4afd6192d2a8d80c7b7926e" Feb 27 11:03:09 crc kubenswrapper[4728]: I0227 11:03:09.006618 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a2c1ae31dcdc9f115e480758486c6150654ac6c4afd6192d2a8d80c7b7926e"} err="failed to get container status \"22a2c1ae31dcdc9f115e480758486c6150654ac6c4afd6192d2a8d80c7b7926e\": rpc error: code = NotFound desc = could not find container \"22a2c1ae31dcdc9f115e480758486c6150654ac6c4afd6192d2a8d80c7b7926e\": container with ID starting with 22a2c1ae31dcdc9f115e480758486c6150654ac6c4afd6192d2a8d80c7b7926e not found: ID does not exist" Feb 27 11:03:10 crc kubenswrapper[4728]: I0227 11:03:10.738034 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="837a3add-46f4-45b9-9a99-587f7cf094a1" path="/var/lib/kubelet/pods/837a3add-46f4-45b9-9a99-587f7cf094a1/volumes" Feb 27 11:03:35 crc kubenswrapper[4728]: I0227 11:03:35.922429 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:03:35 crc kubenswrapper[4728]: I0227 
11:03:35.923287 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:03:37 crc kubenswrapper[4728]: I0227 11:03:37.064263 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-tqhw2"] Feb 27 11:03:37 crc kubenswrapper[4728]: I0227 11:03:37.080897 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-tqhw2"] Feb 27 11:03:38 crc kubenswrapper[4728]: I0227 11:03:38.738328 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c112afc6-4352-4004-885a-0b1d88caffae" path="/var/lib/kubelet/pods/c112afc6-4352-4004-885a-0b1d88caffae/volumes" Feb 27 11:03:53 crc kubenswrapper[4728]: I0227 11:03:53.450910 4728 generic.go:334] "Generic (PLEG): container finished" podID="73079959-3e26-47dc-8d3a-e7051acb0574" containerID="b6d44205bb47f7465ac2f54833061a4f7ec723217d54d6d6e477d34d72178acf" exitCode=0 Feb 27 11:03:53 crc kubenswrapper[4728]: I0227 11:03:53.451101 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" event={"ID":"73079959-3e26-47dc-8d3a-e7051acb0574","Type":"ContainerDied","Data":"b6d44205bb47f7465ac2f54833061a4f7ec723217d54d6d6e477d34d72178acf"} Feb 27 11:03:54 crc kubenswrapper[4728]: I0227 11:03:54.968858 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.053460 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-bootstrap-combined-ca-bundle\") pod \"73079959-3e26-47dc-8d3a-e7051acb0574\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.053568 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-nova-combined-ca-bundle\") pod \"73079959-3e26-47dc-8d3a-e7051acb0574\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.053646 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"73079959-3e26-47dc-8d3a-e7051acb0574\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.053701 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"73079959-3e26-47dc-8d3a-e7051acb0574\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.053729 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-ovn-combined-ca-bundle\") pod 
\"73079959-3e26-47dc-8d3a-e7051acb0574\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.053869 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-telemetry-combined-ca-bundle\") pod \"73079959-3e26-47dc-8d3a-e7051acb0574\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.053906 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-ssh-key-openstack-edpm-ipam\") pod \"73079959-3e26-47dc-8d3a-e7051acb0574\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.053959 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-repo-setup-combined-ca-bundle\") pod \"73079959-3e26-47dc-8d3a-e7051acb0574\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.054006 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzntv\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-kube-api-access-nzntv\") pod \"73079959-3e26-47dc-8d3a-e7051acb0574\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.054053 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-libvirt-combined-ca-bundle\") pod \"73079959-3e26-47dc-8d3a-e7051acb0574\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " Feb 27 11:03:55 crc 
kubenswrapper[4728]: I0227 11:03:55.054146 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"73079959-3e26-47dc-8d3a-e7051acb0574\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.054222 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-telemetry-power-monitoring-combined-ca-bundle\") pod \"73079959-3e26-47dc-8d3a-e7051acb0574\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.054269 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"73079959-3e26-47dc-8d3a-e7051acb0574\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.054308 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-inventory\") pod \"73079959-3e26-47dc-8d3a-e7051acb0574\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.054662 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-ovn-default-certs-0\") pod \"73079959-3e26-47dc-8d3a-e7051acb0574\" (UID: 
\"73079959-3e26-47dc-8d3a-e7051acb0574\") " Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.054722 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-neutron-metadata-combined-ca-bundle\") pod \"73079959-3e26-47dc-8d3a-e7051acb0574\" (UID: \"73079959-3e26-47dc-8d3a-e7051acb0574\") " Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.060823 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "73079959-3e26-47dc-8d3a-e7051acb0574" (UID: "73079959-3e26-47dc-8d3a-e7051acb0574"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.061337 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "73079959-3e26-47dc-8d3a-e7051acb0574" (UID: "73079959-3e26-47dc-8d3a-e7051acb0574"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.066597 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "73079959-3e26-47dc-8d3a-e7051acb0574" (UID: "73079959-3e26-47dc-8d3a-e7051acb0574"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.066723 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "73079959-3e26-47dc-8d3a-e7051acb0574" (UID: "73079959-3e26-47dc-8d3a-e7051acb0574"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.071589 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "73079959-3e26-47dc-8d3a-e7051acb0574" (UID: "73079959-3e26-47dc-8d3a-e7051acb0574"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.071895 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "73079959-3e26-47dc-8d3a-e7051acb0574" (UID: "73079959-3e26-47dc-8d3a-e7051acb0574"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.072328 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "73079959-3e26-47dc-8d3a-e7051acb0574" (UID: "73079959-3e26-47dc-8d3a-e7051acb0574"). 
InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.072724 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-kube-api-access-nzntv" (OuterVolumeSpecName: "kube-api-access-nzntv") pod "73079959-3e26-47dc-8d3a-e7051acb0574" (UID: "73079959-3e26-47dc-8d3a-e7051acb0574"). InnerVolumeSpecName "kube-api-access-nzntv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.075626 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "73079959-3e26-47dc-8d3a-e7051acb0574" (UID: "73079959-3e26-47dc-8d3a-e7051acb0574"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.075759 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "73079959-3e26-47dc-8d3a-e7051acb0574" (UID: "73079959-3e26-47dc-8d3a-e7051acb0574"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.076559 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "73079959-3e26-47dc-8d3a-e7051acb0574" (UID: "73079959-3e26-47dc-8d3a-e7051acb0574"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.078460 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "73079959-3e26-47dc-8d3a-e7051acb0574" (UID: "73079959-3e26-47dc-8d3a-e7051acb0574"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.084964 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "73079959-3e26-47dc-8d3a-e7051acb0574" (UID: "73079959-3e26-47dc-8d3a-e7051acb0574"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.089881 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "73079959-3e26-47dc-8d3a-e7051acb0574" (UID: "73079959-3e26-47dc-8d3a-e7051acb0574"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.121471 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-inventory" (OuterVolumeSpecName: "inventory") pod "73079959-3e26-47dc-8d3a-e7051acb0574" (UID: "73079959-3e26-47dc-8d3a-e7051acb0574"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.130828 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "73079959-3e26-47dc-8d3a-e7051acb0574" (UID: "73079959-3e26-47dc-8d3a-e7051acb0574"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.157881 4728 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.157931 4728 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.157953 4728 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.157974 4728 reconciler_common.go:293] "Volume detached for volume 
\"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.157996 4728 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.158014 4728 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.158034 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.158052 4728 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.158069 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzntv\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-kube-api-access-nzntv\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.158087 4728 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 
11:03:55.158106 4728 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.158127 4728 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.158148 4728 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.158167 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.158184 4728 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/73079959-3e26-47dc-8d3a-e7051acb0574-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.158203 4728 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73079959-3e26-47dc-8d3a-e7051acb0574-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.475435 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" event={"ID":"73079959-3e26-47dc-8d3a-e7051acb0574","Type":"ContainerDied","Data":"8e4caa0d9121b0b8d5badfecc30a76cd03ee6a4875f09a68ec002e7ee558d703"} Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.475476 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e4caa0d9121b0b8d5badfecc30a76cd03ee6a4875f09a68ec002e7ee558d703" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.475563 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f7k98" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.631924 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79"] Feb 27 11:03:55 crc kubenswrapper[4728]: E0227 11:03:55.632469 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3b246e-199a-47ea-9bc2-1b16b91a1522" containerName="extract-content" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.632529 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3b246e-199a-47ea-9bc2-1b16b91a1522" containerName="extract-content" Feb 27 11:03:55 crc kubenswrapper[4728]: E0227 11:03:55.632546 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837a3add-46f4-45b9-9a99-587f7cf094a1" containerName="registry-server" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.632558 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="837a3add-46f4-45b9-9a99-587f7cf094a1" containerName="registry-server" Feb 27 11:03:55 crc kubenswrapper[4728]: E0227 11:03:55.632582 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3b246e-199a-47ea-9bc2-1b16b91a1522" containerName="registry-server" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.632590 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3b246e-199a-47ea-9bc2-1b16b91a1522" 
containerName="registry-server" Feb 27 11:03:55 crc kubenswrapper[4728]: E0227 11:03:55.632612 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837a3add-46f4-45b9-9a99-587f7cf094a1" containerName="extract-utilities" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.632619 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="837a3add-46f4-45b9-9a99-587f7cf094a1" containerName="extract-utilities" Feb 27 11:03:55 crc kubenswrapper[4728]: E0227 11:03:55.632659 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73079959-3e26-47dc-8d3a-e7051acb0574" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.632669 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="73079959-3e26-47dc-8d3a-e7051acb0574" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 27 11:03:55 crc kubenswrapper[4728]: E0227 11:03:55.632680 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837a3add-46f4-45b9-9a99-587f7cf094a1" containerName="extract-content" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.632687 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="837a3add-46f4-45b9-9a99-587f7cf094a1" containerName="extract-content" Feb 27 11:03:55 crc kubenswrapper[4728]: E0227 11:03:55.632711 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3b246e-199a-47ea-9bc2-1b16b91a1522" containerName="extract-utilities" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.632717 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3b246e-199a-47ea-9bc2-1b16b91a1522" containerName="extract-utilities" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.632962 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="837a3add-46f4-45b9-9a99-587f7cf094a1" containerName="registry-server" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.633000 4728 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="eb3b246e-199a-47ea-9bc2-1b16b91a1522" containerName="registry-server" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.633026 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="73079959-3e26-47dc-8d3a-e7051acb0574" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.633987 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.636432 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.636528 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.637012 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.637034 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.641234 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.652150 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79"] Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.774922 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m42g\" (UniqueName: \"kubernetes.io/projected/d9757dd1-ea1e-492b-8781-9e64f6965762-kube-api-access-2m42g\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjj79\" (UID: 
\"d9757dd1-ea1e-492b-8781-9e64f6965762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.775212 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9757dd1-ea1e-492b-8781-9e64f6965762-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjj79\" (UID: \"d9757dd1-ea1e-492b-8781-9e64f6965762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.775442 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9757dd1-ea1e-492b-8781-9e64f6965762-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjj79\" (UID: \"d9757dd1-ea1e-492b-8781-9e64f6965762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.775614 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9757dd1-ea1e-492b-8781-9e64f6965762-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjj79\" (UID: \"d9757dd1-ea1e-492b-8781-9e64f6965762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.775708 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d9757dd1-ea1e-492b-8781-9e64f6965762-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjj79\" (UID: \"d9757dd1-ea1e-492b-8781-9e64f6965762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.877866 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m42g\" (UniqueName: \"kubernetes.io/projected/d9757dd1-ea1e-492b-8781-9e64f6965762-kube-api-access-2m42g\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjj79\" (UID: \"d9757dd1-ea1e-492b-8781-9e64f6965762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.878145 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9757dd1-ea1e-492b-8781-9e64f6965762-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjj79\" (UID: \"d9757dd1-ea1e-492b-8781-9e64f6965762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.878468 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9757dd1-ea1e-492b-8781-9e64f6965762-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjj79\" (UID: \"d9757dd1-ea1e-492b-8781-9e64f6965762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.878629 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9757dd1-ea1e-492b-8781-9e64f6965762-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjj79\" (UID: \"d9757dd1-ea1e-492b-8781-9e64f6965762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.878759 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d9757dd1-ea1e-492b-8781-9e64f6965762-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjj79\" (UID: 
\"d9757dd1-ea1e-492b-8781-9e64f6965762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.879770 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d9757dd1-ea1e-492b-8781-9e64f6965762-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjj79\" (UID: \"d9757dd1-ea1e-492b-8781-9e64f6965762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.883954 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9757dd1-ea1e-492b-8781-9e64f6965762-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjj79\" (UID: \"d9757dd1-ea1e-492b-8781-9e64f6965762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.884607 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9757dd1-ea1e-492b-8781-9e64f6965762-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjj79\" (UID: \"d9757dd1-ea1e-492b-8781-9e64f6965762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.885623 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9757dd1-ea1e-492b-8781-9e64f6965762-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjj79\" (UID: \"d9757dd1-ea1e-492b-8781-9e64f6965762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.899950 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m42g\" (UniqueName: 
\"kubernetes.io/projected/d9757dd1-ea1e-492b-8781-9e64f6965762-kube-api-access-2m42g\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjj79\" (UID: \"d9757dd1-ea1e-492b-8781-9e64f6965762\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" Feb 27 11:03:55 crc kubenswrapper[4728]: I0227 11:03:55.954477 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" Feb 27 11:03:56 crc kubenswrapper[4728]: I0227 11:03:56.540845 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79"] Feb 27 11:03:57 crc kubenswrapper[4728]: I0227 11:03:57.504824 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" event={"ID":"d9757dd1-ea1e-492b-8781-9e64f6965762","Type":"ContainerStarted","Data":"f3c70618a3bfa16bcf5d0c2f2eee9a54b7f673c3ff5199966a381878f0e1bfa7"} Feb 27 11:03:57 crc kubenswrapper[4728]: I0227 11:03:57.505173 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" event={"ID":"d9757dd1-ea1e-492b-8781-9e64f6965762","Type":"ContainerStarted","Data":"22126d4c66fb89d6fa9e9236d3c5b21df66417685085e9f93e234f5bb2625937"} Feb 27 11:03:57 crc kubenswrapper[4728]: I0227 11:03:57.532984 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" podStartSLOduration=2.080230223 podStartE2EDuration="2.532965526s" podCreationTimestamp="2026-02-27 11:03:55 +0000 UTC" firstStartedPulling="2026-02-27 11:03:56.542846706 +0000 UTC m=+2256.505212822" lastFinishedPulling="2026-02-27 11:03:56.995581979 +0000 UTC m=+2256.957948125" observedRunningTime="2026-02-27 11:03:57.530132309 +0000 UTC m=+2257.492498485" watchObservedRunningTime="2026-02-27 11:03:57.532965526 +0000 UTC m=+2257.495331632" Feb 27 11:04:00 crc kubenswrapper[4728]: I0227 
11:04:00.136588 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536504-b82wk"] Feb 27 11:04:00 crc kubenswrapper[4728]: I0227 11:04:00.138448 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536504-b82wk" Feb 27 11:04:00 crc kubenswrapper[4728]: I0227 11:04:00.141236 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:04:00 crc kubenswrapper[4728]: I0227 11:04:00.141829 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:04:00 crc kubenswrapper[4728]: I0227 11:04:00.141879 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:04:00 crc kubenswrapper[4728]: I0227 11:04:00.156405 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536504-b82wk"] Feb 27 11:04:00 crc kubenswrapper[4728]: I0227 11:04:00.299079 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wzh6\" (UniqueName: \"kubernetes.io/projected/adea2aa6-d1ac-4405-87c6-287a246a5db0-kube-api-access-4wzh6\") pod \"auto-csr-approver-29536504-b82wk\" (UID: \"adea2aa6-d1ac-4405-87c6-287a246a5db0\") " pod="openshift-infra/auto-csr-approver-29536504-b82wk" Feb 27 11:04:00 crc kubenswrapper[4728]: I0227 11:04:00.401513 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wzh6\" (UniqueName: \"kubernetes.io/projected/adea2aa6-d1ac-4405-87c6-287a246a5db0-kube-api-access-4wzh6\") pod \"auto-csr-approver-29536504-b82wk\" (UID: \"adea2aa6-d1ac-4405-87c6-287a246a5db0\") " pod="openshift-infra/auto-csr-approver-29536504-b82wk" Feb 27 11:04:00 crc kubenswrapper[4728]: I0227 11:04:00.423280 4728 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-4wzh6\" (UniqueName: \"kubernetes.io/projected/adea2aa6-d1ac-4405-87c6-287a246a5db0-kube-api-access-4wzh6\") pod \"auto-csr-approver-29536504-b82wk\" (UID: \"adea2aa6-d1ac-4405-87c6-287a246a5db0\") " pod="openshift-infra/auto-csr-approver-29536504-b82wk" Feb 27 11:04:00 crc kubenswrapper[4728]: I0227 11:04:00.468346 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536504-b82wk" Feb 27 11:04:00 crc kubenswrapper[4728]: W0227 11:04:00.976269 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadea2aa6_d1ac_4405_87c6_287a246a5db0.slice/crio-7eb8823cd3730cc9f83d0e7baa4e85388ed7f67f8a2350ffb86196d6a62e84f5 WatchSource:0}: Error finding container 7eb8823cd3730cc9f83d0e7baa4e85388ed7f67f8a2350ffb86196d6a62e84f5: Status 404 returned error can't find the container with id 7eb8823cd3730cc9f83d0e7baa4e85388ed7f67f8a2350ffb86196d6a62e84f5 Feb 27 11:04:00 crc kubenswrapper[4728]: I0227 11:04:00.978170 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536504-b82wk"] Feb 27 11:04:01 crc kubenswrapper[4728]: I0227 11:04:01.554011 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536504-b82wk" event={"ID":"adea2aa6-d1ac-4405-87c6-287a246a5db0","Type":"ContainerStarted","Data":"7eb8823cd3730cc9f83d0e7baa4e85388ed7f67f8a2350ffb86196d6a62e84f5"} Feb 27 11:04:02 crc kubenswrapper[4728]: I0227 11:04:02.577292 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536504-b82wk" event={"ID":"adea2aa6-d1ac-4405-87c6-287a246a5db0","Type":"ContainerStarted","Data":"501fc3d34daee083998ac9589c3bfea2b403f8082c9dfe59254aee09e6d7bcd1"} Feb 27 11:04:02 crc kubenswrapper[4728]: I0227 11:04:02.592509 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29536504-b82wk" podStartSLOduration=1.590779299 podStartE2EDuration="2.592483166s" podCreationTimestamp="2026-02-27 11:04:00 +0000 UTC" firstStartedPulling="2026-02-27 11:04:00.980279284 +0000 UTC m=+2260.942645390" lastFinishedPulling="2026-02-27 11:04:01.981983151 +0000 UTC m=+2261.944349257" observedRunningTime="2026-02-27 11:04:02.589131265 +0000 UTC m=+2262.551497371" watchObservedRunningTime="2026-02-27 11:04:02.592483166 +0000 UTC m=+2262.554849272" Feb 27 11:04:03 crc kubenswrapper[4728]: I0227 11:04:03.613443 4728 generic.go:334] "Generic (PLEG): container finished" podID="adea2aa6-d1ac-4405-87c6-287a246a5db0" containerID="501fc3d34daee083998ac9589c3bfea2b403f8082c9dfe59254aee09e6d7bcd1" exitCode=0 Feb 27 11:04:03 crc kubenswrapper[4728]: I0227 11:04:03.613523 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536504-b82wk" event={"ID":"adea2aa6-d1ac-4405-87c6-287a246a5db0","Type":"ContainerDied","Data":"501fc3d34daee083998ac9589c3bfea2b403f8082c9dfe59254aee09e6d7bcd1"} Feb 27 11:04:05 crc kubenswrapper[4728]: I0227 11:04:05.050212 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536504-b82wk" Feb 27 11:04:05 crc kubenswrapper[4728]: I0227 11:04:05.145673 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wzh6\" (UniqueName: \"kubernetes.io/projected/adea2aa6-d1ac-4405-87c6-287a246a5db0-kube-api-access-4wzh6\") pod \"adea2aa6-d1ac-4405-87c6-287a246a5db0\" (UID: \"adea2aa6-d1ac-4405-87c6-287a246a5db0\") " Feb 27 11:04:05 crc kubenswrapper[4728]: I0227 11:04:05.153761 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adea2aa6-d1ac-4405-87c6-287a246a5db0-kube-api-access-4wzh6" (OuterVolumeSpecName: "kube-api-access-4wzh6") pod "adea2aa6-d1ac-4405-87c6-287a246a5db0" (UID: "adea2aa6-d1ac-4405-87c6-287a246a5db0"). InnerVolumeSpecName "kube-api-access-4wzh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:04:05 crc kubenswrapper[4728]: I0227 11:04:05.249330 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wzh6\" (UniqueName: \"kubernetes.io/projected/adea2aa6-d1ac-4405-87c6-287a246a5db0-kube-api-access-4wzh6\") on node \"crc\" DevicePath \"\"" Feb 27 11:04:05 crc kubenswrapper[4728]: I0227 11:04:05.635942 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536504-b82wk" event={"ID":"adea2aa6-d1ac-4405-87c6-287a246a5db0","Type":"ContainerDied","Data":"7eb8823cd3730cc9f83d0e7baa4e85388ed7f67f8a2350ffb86196d6a62e84f5"} Feb 27 11:04:05 crc kubenswrapper[4728]: I0227 11:04:05.635995 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eb8823cd3730cc9f83d0e7baa4e85388ed7f67f8a2350ffb86196d6a62e84f5" Feb 27 11:04:05 crc kubenswrapper[4728]: I0227 11:04:05.635995 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536504-b82wk"
Feb 27 11:04:05 crc kubenswrapper[4728]: I0227 11:04:05.688238 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536498-kg4qj"]
Feb 27 11:04:05 crc kubenswrapper[4728]: I0227 11:04:05.699705 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536498-kg4qj"]
Feb 27 11:04:05 crc kubenswrapper[4728]: I0227 11:04:05.922625 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 11:04:05 crc kubenswrapper[4728]: I0227 11:04:05.922714 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 11:04:06 crc kubenswrapper[4728]: I0227 11:04:06.739634 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c4815a0-42fc-4f9d-9452-ea11ed73135b" path="/var/lib/kubelet/pods/0c4815a0-42fc-4f9d-9452-ea11ed73135b/volumes"
Feb 27 11:04:23 crc kubenswrapper[4728]: I0227 11:04:23.882808 4728 scope.go:117] "RemoveContainer" containerID="ba3cc39ec142272e723f6e09dc3b0f199c93eec7744c706fa3280f9de36ca2d8"
Feb 27 11:04:23 crc kubenswrapper[4728]: I0227 11:04:23.928712 4728 scope.go:117] "RemoveContainer" containerID="5b684cf6d573b4c93d90a97b6cc27fcc4526db3433995506c07a94d098162944"
Feb 27 11:04:24 crc kubenswrapper[4728]: I0227 11:04:24.039762 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-t5pxb"]
Feb 27 11:04:24 crc kubenswrapper[4728]: I0227 11:04:24.053612 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-t5pxb"]
Feb 27 11:04:24 crc kubenswrapper[4728]: I0227 11:04:24.745179 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="132c4b8b-7345-46a9-8bfa-70bbb048d6f5" path="/var/lib/kubelet/pods/132c4b8b-7345-46a9-8bfa-70bbb048d6f5/volumes"
Feb 27 11:04:35 crc kubenswrapper[4728]: I0227 11:04:35.922071 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 11:04:35 crc kubenswrapper[4728]: I0227 11:04:35.922741 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 11:04:35 crc kubenswrapper[4728]: I0227 11:04:35.922796 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh"
Feb 27 11:04:35 crc kubenswrapper[4728]: I0227 11:04:35.923788 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8d628b62f7040f098c44223cc7f9acb05d4ddab156eb45d341a5998e4060b93"} pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 27 11:04:35 crc kubenswrapper[4728]: I0227 11:04:35.923880 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" containerID="cri-o://d8d628b62f7040f098c44223cc7f9acb05d4ddab156eb45d341a5998e4060b93" gracePeriod=600
Feb 27 11:04:37 crc kubenswrapper[4728]: I0227 11:04:37.004696 4728 generic.go:334] "Generic (PLEG): container finished" podID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerID="d8d628b62f7040f098c44223cc7f9acb05d4ddab156eb45d341a5998e4060b93" exitCode=0
Feb 27 11:04:37 crc kubenswrapper[4728]: I0227 11:04:37.004772 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerDied","Data":"d8d628b62f7040f098c44223cc7f9acb05d4ddab156eb45d341a5998e4060b93"}
Feb 27 11:04:37 crc kubenswrapper[4728]: I0227 11:04:37.005261 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5"}
Feb 27 11:04:37 crc kubenswrapper[4728]: I0227 11:04:37.005283 4728 scope.go:117] "RemoveContainer" containerID="1e9c3f8a2c89093b960b65e9fc2c283b1d41282e75aab6b6afbc29feaad679c4"
Feb 27 11:05:00 crc kubenswrapper[4728]: I0227 11:05:00.267905 4728 generic.go:334] "Generic (PLEG): container finished" podID="d9757dd1-ea1e-492b-8781-9e64f6965762" containerID="f3c70618a3bfa16bcf5d0c2f2eee9a54b7f673c3ff5199966a381878f0e1bfa7" exitCode=0
Feb 27 11:05:00 crc kubenswrapper[4728]: I0227 11:05:00.267968 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" event={"ID":"d9757dd1-ea1e-492b-8781-9e64f6965762","Type":"ContainerDied","Data":"f3c70618a3bfa16bcf5d0c2f2eee9a54b7f673c3ff5199966a381878f0e1bfa7"}
Feb 27 11:05:01 crc kubenswrapper[4728]: I0227 11:05:01.776649 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79"
Feb 27 11:05:01 crc kubenswrapper[4728]: I0227 11:05:01.818580 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9757dd1-ea1e-492b-8781-9e64f6965762-ssh-key-openstack-edpm-ipam\") pod \"d9757dd1-ea1e-492b-8781-9e64f6965762\" (UID: \"d9757dd1-ea1e-492b-8781-9e64f6965762\") "
Feb 27 11:05:01 crc kubenswrapper[4728]: I0227 11:05:01.818661 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9757dd1-ea1e-492b-8781-9e64f6965762-ovn-combined-ca-bundle\") pod \"d9757dd1-ea1e-492b-8781-9e64f6965762\" (UID: \"d9757dd1-ea1e-492b-8781-9e64f6965762\") "
Feb 27 11:05:01 crc kubenswrapper[4728]: I0227 11:05:01.818849 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m42g\" (UniqueName: \"kubernetes.io/projected/d9757dd1-ea1e-492b-8781-9e64f6965762-kube-api-access-2m42g\") pod \"d9757dd1-ea1e-492b-8781-9e64f6965762\" (UID: \"d9757dd1-ea1e-492b-8781-9e64f6965762\") "
Feb 27 11:05:01 crc kubenswrapper[4728]: I0227 11:05:01.818928 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9757dd1-ea1e-492b-8781-9e64f6965762-inventory\") pod \"d9757dd1-ea1e-492b-8781-9e64f6965762\" (UID: \"d9757dd1-ea1e-492b-8781-9e64f6965762\") "
Feb 27 11:05:01 crc kubenswrapper[4728]: I0227 11:05:01.819023 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d9757dd1-ea1e-492b-8781-9e64f6965762-ovncontroller-config-0\") pod \"d9757dd1-ea1e-492b-8781-9e64f6965762\" (UID: \"d9757dd1-ea1e-492b-8781-9e64f6965762\") "
Feb 27 11:05:01 crc kubenswrapper[4728]: I0227 11:05:01.847324 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9757dd1-ea1e-492b-8781-9e64f6965762-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d9757dd1-ea1e-492b-8781-9e64f6965762" (UID: "d9757dd1-ea1e-492b-8781-9e64f6965762"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:05:01 crc kubenswrapper[4728]: I0227 11:05:01.849590 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9757dd1-ea1e-492b-8781-9e64f6965762-kube-api-access-2m42g" (OuterVolumeSpecName: "kube-api-access-2m42g") pod "d9757dd1-ea1e-492b-8781-9e64f6965762" (UID: "d9757dd1-ea1e-492b-8781-9e64f6965762"). InnerVolumeSpecName "kube-api-access-2m42g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:05:01 crc kubenswrapper[4728]: I0227 11:05:01.854542 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9757dd1-ea1e-492b-8781-9e64f6965762-inventory" (OuterVolumeSpecName: "inventory") pod "d9757dd1-ea1e-492b-8781-9e64f6965762" (UID: "d9757dd1-ea1e-492b-8781-9e64f6965762"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:05:01 crc kubenswrapper[4728]: I0227 11:05:01.861204 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9757dd1-ea1e-492b-8781-9e64f6965762-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d9757dd1-ea1e-492b-8781-9e64f6965762" (UID: "d9757dd1-ea1e-492b-8781-9e64f6965762"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:05:01 crc kubenswrapper[4728]: I0227 11:05:01.862150 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9757dd1-ea1e-492b-8781-9e64f6965762-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "d9757dd1-ea1e-492b-8781-9e64f6965762" (UID: "d9757dd1-ea1e-492b-8781-9e64f6965762"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 11:05:01 crc kubenswrapper[4728]: I0227 11:05:01.922185 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m42g\" (UniqueName: \"kubernetes.io/projected/d9757dd1-ea1e-492b-8781-9e64f6965762-kube-api-access-2m42g\") on node \"crc\" DevicePath \"\""
Feb 27 11:05:01 crc kubenswrapper[4728]: I0227 11:05:01.922213 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9757dd1-ea1e-492b-8781-9e64f6965762-inventory\") on node \"crc\" DevicePath \"\""
Feb 27 11:05:01 crc kubenswrapper[4728]: I0227 11:05:01.922222 4728 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d9757dd1-ea1e-492b-8781-9e64f6965762-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Feb 27 11:05:01 crc kubenswrapper[4728]: I0227 11:05:01.922232 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9757dd1-ea1e-492b-8781-9e64f6965762-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 27 11:05:01 crc kubenswrapper[4728]: I0227 11:05:01.922241 4728 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9757dd1-ea1e-492b-8781-9e64f6965762-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.293093 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79" event={"ID":"d9757dd1-ea1e-492b-8781-9e64f6965762","Type":"ContainerDied","Data":"22126d4c66fb89d6fa9e9236d3c5b21df66417685085e9f93e234f5bb2625937"}
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.293143 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22126d4c66fb89d6fa9e9236d3c5b21df66417685085e9f93e234f5bb2625937"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.293609 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjj79"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.433748 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"]
Feb 27 11:05:02 crc kubenswrapper[4728]: E0227 11:05:02.434726 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9757dd1-ea1e-492b-8781-9e64f6965762" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.434765 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9757dd1-ea1e-492b-8781-9e64f6965762" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Feb 27 11:05:02 crc kubenswrapper[4728]: E0227 11:05:02.434788 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adea2aa6-d1ac-4405-87c6-287a246a5db0" containerName="oc"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.434797 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="adea2aa6-d1ac-4405-87c6-287a246a5db0" containerName="oc"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.435271 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9757dd1-ea1e-492b-8781-9e64f6965762" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.435307 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="adea2aa6-d1ac-4405-87c6-287a246a5db0" containerName="oc"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.450431 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"]
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.450575 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.453900 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.454083 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.454091 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.454153 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.454202 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.454352 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.539453 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.539630 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.539704 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6b8q\" (UniqueName: \"kubernetes.io/projected/63c6a227-e2d7-4d6d-8519-a9f744424f6a-kube-api-access-r6b8q\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.540218 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.540406 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.540655 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.642399 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.642516 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.642600 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.642653 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.642695 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.642717 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6b8q\" (UniqueName: \"kubernetes.io/projected/63c6a227-e2d7-4d6d-8519-a9f744424f6a-kube-api-access-r6b8q\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.648274 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.662221 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.664517 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.665118 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.669179 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.670758 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6b8q\" (UniqueName: \"kubernetes.io/projected/63c6a227-e2d7-4d6d-8519-a9f744424f6a-kube-api-access-r6b8q\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:02 crc kubenswrapper[4728]: I0227 11:05:02.775436 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:03 crc kubenswrapper[4728]: I0227 11:05:03.482467 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"]
Feb 27 11:05:04 crc kubenswrapper[4728]: I0227 11:05:04.315653 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h" event={"ID":"63c6a227-e2d7-4d6d-8519-a9f744424f6a","Type":"ContainerStarted","Data":"9fc8460758b5bfab0ffc3afce187cf866b748e86a4c8ea76661457d4ef1a06f1"}
Feb 27 11:05:05 crc kubenswrapper[4728]: I0227 11:05:05.329218 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h" event={"ID":"63c6a227-e2d7-4d6d-8519-a9f744424f6a","Type":"ContainerStarted","Data":"08ccef2c118b4102795b375c57769031d2a93d6155d39f2ecc16e0251b76bffe"}
Feb 27 11:05:05 crc kubenswrapper[4728]: I0227 11:05:05.365698 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h" podStartSLOduration=2.839987305 podStartE2EDuration="3.365672082s" podCreationTimestamp="2026-02-27 11:05:02 +0000 UTC" firstStartedPulling="2026-02-27 11:05:03.554020248 +0000 UTC m=+2323.516386354" lastFinishedPulling="2026-02-27 11:05:04.079705025 +0000 UTC m=+2324.042071131" observedRunningTime="2026-02-27 11:05:05.356893551 +0000 UTC m=+2325.319259657" watchObservedRunningTime="2026-02-27 11:05:05.365672082 +0000 UTC m=+2325.328038208"
Feb 27 11:05:24 crc kubenswrapper[4728]: I0227 11:05:24.055798 4728 scope.go:117] "RemoveContainer" containerID="4003518932fd59a007cd1f943804b2d07f1ed587ed621acdd720ba4840735400"
Feb 27 11:05:52 crc kubenswrapper[4728]: I0227 11:05:52.908189 4728 generic.go:334] "Generic (PLEG): container finished" podID="63c6a227-e2d7-4d6d-8519-a9f744424f6a" containerID="08ccef2c118b4102795b375c57769031d2a93d6155d39f2ecc16e0251b76bffe" exitCode=0
Feb 27 11:05:52 crc kubenswrapper[4728]: I0227 11:05:52.908283 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h" event={"ID":"63c6a227-e2d7-4d6d-8519-a9f744424f6a","Type":"ContainerDied","Data":"08ccef2c118b4102795b375c57769031d2a93d6155d39f2ecc16e0251b76bffe"}
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.500983 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.616772 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") "
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.616882 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6b8q\" (UniqueName: \"kubernetes.io/projected/63c6a227-e2d7-4d6d-8519-a9f744424f6a-kube-api-access-r6b8q\") pod \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") "
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.617767 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-ssh-key-openstack-edpm-ipam\") pod \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") "
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.617872 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-nova-metadata-neutron-config-0\") pod \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") "
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.617945 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-neutron-metadata-combined-ca-bundle\") pod \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") "
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.618320 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-inventory\") pod \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\" (UID: \"63c6a227-e2d7-4d6d-8519-a9f744424f6a\") "
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.623018 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63c6a227-e2d7-4d6d-8519-a9f744424f6a-kube-api-access-r6b8q" (OuterVolumeSpecName: "kube-api-access-r6b8q") pod "63c6a227-e2d7-4d6d-8519-a9f744424f6a" (UID: "63c6a227-e2d7-4d6d-8519-a9f744424f6a"). InnerVolumeSpecName "kube-api-access-r6b8q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.623066 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "63c6a227-e2d7-4d6d-8519-a9f744424f6a" (UID: "63c6a227-e2d7-4d6d-8519-a9f744424f6a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.657626 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "63c6a227-e2d7-4d6d-8519-a9f744424f6a" (UID: "63c6a227-e2d7-4d6d-8519-a9f744424f6a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.660722 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "63c6a227-e2d7-4d6d-8519-a9f744424f6a" (UID: "63c6a227-e2d7-4d6d-8519-a9f744424f6a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.666388 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "63c6a227-e2d7-4d6d-8519-a9f744424f6a" (UID: "63c6a227-e2d7-4d6d-8519-a9f744424f6a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.687710 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-inventory" (OuterVolumeSpecName: "inventory") pod "63c6a227-e2d7-4d6d-8519-a9f744424f6a" (UID: "63c6a227-e2d7-4d6d-8519-a9f744424f6a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.721834 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.721883 4728 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.721898 4728 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.721914 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-inventory\") on node \"crc\" DevicePath \"\""
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.721927 4728 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/63c6a227-e2d7-4d6d-8519-a9f744424f6a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.721944 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6b8q\" (UniqueName: \"kubernetes.io/projected/63c6a227-e2d7-4d6d-8519-a9f744424f6a-kube-api-access-r6b8q\") on node \"crc\" DevicePath \"\""
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.941957 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h" event={"ID":"63c6a227-e2d7-4d6d-8519-a9f744424f6a","Type":"ContainerDied","Data":"9fc8460758b5bfab0ffc3afce187cf866b748e86a4c8ea76661457d4ef1a06f1"}
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.942032 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fc8460758b5bfab0ffc3afce187cf866b748e86a4c8ea76661457d4ef1a06f1"
Feb 27 11:05:54 crc kubenswrapper[4728]: I0227 11:05:54.942075 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.070292 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk"]
Feb 27 11:05:55 crc kubenswrapper[4728]: E0227 11:05:55.070792 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c6a227-e2d7-4d6d-8519-a9f744424f6a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.070810 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c6a227-e2d7-4d6d-8519-a9f744424f6a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.071098 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c6a227-e2d7-4d6d-8519-a9f744424f6a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.072179 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.083940 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk"]
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.085222 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.085472 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.085635 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.085670 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.089975 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.135420 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-25clk\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.135587 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw988\" (UniqueName: \"kubernetes.io/projected/366ef133-dc99-408d-9a1b-220869733a30-kube-api-access-tw988\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-25clk\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.135644 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-25clk\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.135814 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-25clk\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.136063 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-25clk\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.238488 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw988\" (UniqueName: \"kubernetes.io/projected/366ef133-dc99-408d-9a1b-220869733a30-kube-api-access-tw988\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-25clk\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.238877 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-25clk\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.239031 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-25clk\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.239137 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-25clk\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.239198 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-25clk\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk"
Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.244355 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-libvirt-combined-ca-bundle\") pod
\"libvirt-edpm-deployment-openstack-edpm-ipam-25clk\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk" Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.246006 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-25clk\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk" Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.246399 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-25clk\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk" Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.254652 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-25clk\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk" Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.261274 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw988\" (UniqueName: \"kubernetes.io/projected/366ef133-dc99-408d-9a1b-220869733a30-kube-api-access-tw988\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-25clk\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk" Feb 27 11:05:55 crc kubenswrapper[4728]: I0227 11:05:55.390513 4728 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk" Feb 27 11:05:56 crc kubenswrapper[4728]: I0227 11:05:56.002614 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk"] Feb 27 11:05:56 crc kubenswrapper[4728]: I0227 11:05:56.966039 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk" event={"ID":"366ef133-dc99-408d-9a1b-220869733a30","Type":"ContainerStarted","Data":"db72c030645bb355cc2371ad2b5e47e40f5444be8164a63adfa4cd3a5725f5e7"} Feb 27 11:05:56 crc kubenswrapper[4728]: I0227 11:05:56.966831 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk" event={"ID":"366ef133-dc99-408d-9a1b-220869733a30","Type":"ContainerStarted","Data":"acdd5cd97342ce70f0e93bd1b83b63739ce238b994664d18489913833355b853"} Feb 27 11:05:56 crc kubenswrapper[4728]: I0227 11:05:56.993208 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk" podStartSLOduration=1.5310454949999999 podStartE2EDuration="1.993190446s" podCreationTimestamp="2026-02-27 11:05:55 +0000 UTC" firstStartedPulling="2026-02-27 11:05:56.007313732 +0000 UTC m=+2375.969679848" lastFinishedPulling="2026-02-27 11:05:56.469458693 +0000 UTC m=+2376.431824799" observedRunningTime="2026-02-27 11:05:56.987886471 +0000 UTC m=+2376.950252607" watchObservedRunningTime="2026-02-27 11:05:56.993190446 +0000 UTC m=+2376.955556552" Feb 27 11:06:00 crc kubenswrapper[4728]: I0227 11:06:00.142137 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536506-p6fnn"] Feb 27 11:06:00 crc kubenswrapper[4728]: I0227 11:06:00.144038 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536506-p6fnn" Feb 27 11:06:00 crc kubenswrapper[4728]: I0227 11:06:00.147313 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:06:00 crc kubenswrapper[4728]: I0227 11:06:00.147473 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:06:00 crc kubenswrapper[4728]: I0227 11:06:00.147932 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:06:00 crc kubenswrapper[4728]: I0227 11:06:00.156950 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536506-p6fnn"] Feb 27 11:06:00 crc kubenswrapper[4728]: I0227 11:06:00.258650 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms2s2\" (UniqueName: \"kubernetes.io/projected/9fb7e9fc-8c07-4932-8b4c-6e07b0c76147-kube-api-access-ms2s2\") pod \"auto-csr-approver-29536506-p6fnn\" (UID: \"9fb7e9fc-8c07-4932-8b4c-6e07b0c76147\") " pod="openshift-infra/auto-csr-approver-29536506-p6fnn" Feb 27 11:06:00 crc kubenswrapper[4728]: I0227 11:06:00.362429 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms2s2\" (UniqueName: \"kubernetes.io/projected/9fb7e9fc-8c07-4932-8b4c-6e07b0c76147-kube-api-access-ms2s2\") pod \"auto-csr-approver-29536506-p6fnn\" (UID: \"9fb7e9fc-8c07-4932-8b4c-6e07b0c76147\") " pod="openshift-infra/auto-csr-approver-29536506-p6fnn" Feb 27 11:06:00 crc kubenswrapper[4728]: I0227 11:06:00.382135 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms2s2\" (UniqueName: \"kubernetes.io/projected/9fb7e9fc-8c07-4932-8b4c-6e07b0c76147-kube-api-access-ms2s2\") pod \"auto-csr-approver-29536506-p6fnn\" (UID: \"9fb7e9fc-8c07-4932-8b4c-6e07b0c76147\") " 
pod="openshift-infra/auto-csr-approver-29536506-p6fnn" Feb 27 11:06:00 crc kubenswrapper[4728]: I0227 11:06:00.479334 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536506-p6fnn" Feb 27 11:06:01 crc kubenswrapper[4728]: I0227 11:06:01.055211 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536506-p6fnn"] Feb 27 11:06:01 crc kubenswrapper[4728]: W0227 11:06:01.056208 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fb7e9fc_8c07_4932_8b4c_6e07b0c76147.slice/crio-f75e1941844bc025b44084cb47db8c1d221932a7f7367819e63003d5e2f3daaf WatchSource:0}: Error finding container f75e1941844bc025b44084cb47db8c1d221932a7f7367819e63003d5e2f3daaf: Status 404 returned error can't find the container with id f75e1941844bc025b44084cb47db8c1d221932a7f7367819e63003d5e2f3daaf Feb 27 11:06:02 crc kubenswrapper[4728]: I0227 11:06:02.034395 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536506-p6fnn" event={"ID":"9fb7e9fc-8c07-4932-8b4c-6e07b0c76147","Type":"ContainerStarted","Data":"f75e1941844bc025b44084cb47db8c1d221932a7f7367819e63003d5e2f3daaf"} Feb 27 11:06:03 crc kubenswrapper[4728]: I0227 11:06:03.049056 4728 generic.go:334] "Generic (PLEG): container finished" podID="9fb7e9fc-8c07-4932-8b4c-6e07b0c76147" containerID="bb830deb67d6bc6ce24596259babcbae96fb267be31d8b66aa57dc70b0c75c1d" exitCode=0 Feb 27 11:06:03 crc kubenswrapper[4728]: I0227 11:06:03.049111 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536506-p6fnn" event={"ID":"9fb7e9fc-8c07-4932-8b4c-6e07b0c76147","Type":"ContainerDied","Data":"bb830deb67d6bc6ce24596259babcbae96fb267be31d8b66aa57dc70b0c75c1d"} Feb 27 11:06:04 crc kubenswrapper[4728]: I0227 11:06:04.451742 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536506-p6fnn" Feb 27 11:06:04 crc kubenswrapper[4728]: I0227 11:06:04.501665 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms2s2\" (UniqueName: \"kubernetes.io/projected/9fb7e9fc-8c07-4932-8b4c-6e07b0c76147-kube-api-access-ms2s2\") pod \"9fb7e9fc-8c07-4932-8b4c-6e07b0c76147\" (UID: \"9fb7e9fc-8c07-4932-8b4c-6e07b0c76147\") " Feb 27 11:06:04 crc kubenswrapper[4728]: I0227 11:06:04.507380 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fb7e9fc-8c07-4932-8b4c-6e07b0c76147-kube-api-access-ms2s2" (OuterVolumeSpecName: "kube-api-access-ms2s2") pod "9fb7e9fc-8c07-4932-8b4c-6e07b0c76147" (UID: "9fb7e9fc-8c07-4932-8b4c-6e07b0c76147"). InnerVolumeSpecName "kube-api-access-ms2s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:06:04 crc kubenswrapper[4728]: I0227 11:06:04.604962 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms2s2\" (UniqueName: \"kubernetes.io/projected/9fb7e9fc-8c07-4932-8b4c-6e07b0c76147-kube-api-access-ms2s2\") on node \"crc\" DevicePath \"\"" Feb 27 11:06:05 crc kubenswrapper[4728]: I0227 11:06:05.073804 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536506-p6fnn" event={"ID":"9fb7e9fc-8c07-4932-8b4c-6e07b0c76147","Type":"ContainerDied","Data":"f75e1941844bc025b44084cb47db8c1d221932a7f7367819e63003d5e2f3daaf"} Feb 27 11:06:05 crc kubenswrapper[4728]: I0227 11:06:05.074194 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f75e1941844bc025b44084cb47db8c1d221932a7f7367819e63003d5e2f3daaf" Feb 27 11:06:05 crc kubenswrapper[4728]: I0227 11:06:05.074075 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536506-p6fnn" Feb 27 11:06:05 crc kubenswrapper[4728]: I0227 11:06:05.523079 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536500-pqxmp"] Feb 27 11:06:05 crc kubenswrapper[4728]: I0227 11:06:05.535518 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536500-pqxmp"] Feb 27 11:06:06 crc kubenswrapper[4728]: I0227 11:06:06.741827 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="878771e5-dc09-46d1-a3b0-79628625cd3e" path="/var/lib/kubelet/pods/878771e5-dc09-46d1-a3b0-79628625cd3e/volumes" Feb 27 11:06:24 crc kubenswrapper[4728]: I0227 11:06:24.158393 4728 scope.go:117] "RemoveContainer" containerID="bb6a64d91e929eccc4a6e4f936efb33af12bc05c225938f538353c554399fb7e" Feb 27 11:07:05 crc kubenswrapper[4728]: I0227 11:07:05.922278 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:07:05 crc kubenswrapper[4728]: I0227 11:07:05.922935 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:07:35 crc kubenswrapper[4728]: I0227 11:07:35.922098 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:07:35 crc kubenswrapper[4728]: 
I0227 11:07:35.922729 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:08:00 crc kubenswrapper[4728]: I0227 11:08:00.169121 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536508-bszlh"] Feb 27 11:08:00 crc kubenswrapper[4728]: E0227 11:08:00.170399 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fb7e9fc-8c07-4932-8b4c-6e07b0c76147" containerName="oc" Feb 27 11:08:00 crc kubenswrapper[4728]: I0227 11:08:00.170415 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fb7e9fc-8c07-4932-8b4c-6e07b0c76147" containerName="oc" Feb 27 11:08:00 crc kubenswrapper[4728]: I0227 11:08:00.170744 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fb7e9fc-8c07-4932-8b4c-6e07b0c76147" containerName="oc" Feb 27 11:08:00 crc kubenswrapper[4728]: I0227 11:08:00.171739 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536508-bszlh" Feb 27 11:08:00 crc kubenswrapper[4728]: I0227 11:08:00.174480 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:08:00 crc kubenswrapper[4728]: I0227 11:08:00.174866 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:08:00 crc kubenswrapper[4728]: I0227 11:08:00.175112 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:08:00 crc kubenswrapper[4728]: I0227 11:08:00.284012 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536508-bszlh"] Feb 27 11:08:00 crc kubenswrapper[4728]: I0227 11:08:00.328247 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj55q\" (UniqueName: \"kubernetes.io/projected/6c5a8b83-71a1-425d-a3c9-ddf070667c4e-kube-api-access-dj55q\") pod \"auto-csr-approver-29536508-bszlh\" (UID: \"6c5a8b83-71a1-425d-a3c9-ddf070667c4e\") " pod="openshift-infra/auto-csr-approver-29536508-bszlh" Feb 27 11:08:00 crc kubenswrapper[4728]: I0227 11:08:00.430358 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj55q\" (UniqueName: \"kubernetes.io/projected/6c5a8b83-71a1-425d-a3c9-ddf070667c4e-kube-api-access-dj55q\") pod \"auto-csr-approver-29536508-bszlh\" (UID: \"6c5a8b83-71a1-425d-a3c9-ddf070667c4e\") " pod="openshift-infra/auto-csr-approver-29536508-bszlh" Feb 27 11:08:00 crc kubenswrapper[4728]: I0227 11:08:00.461079 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj55q\" (UniqueName: \"kubernetes.io/projected/6c5a8b83-71a1-425d-a3c9-ddf070667c4e-kube-api-access-dj55q\") pod \"auto-csr-approver-29536508-bszlh\" (UID: \"6c5a8b83-71a1-425d-a3c9-ddf070667c4e\") " 
pod="openshift-infra/auto-csr-approver-29536508-bszlh" Feb 27 11:08:00 crc kubenswrapper[4728]: I0227 11:08:00.496858 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536508-bszlh" Feb 27 11:08:01 crc kubenswrapper[4728]: I0227 11:08:01.146193 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536508-bszlh"] Feb 27 11:08:01 crc kubenswrapper[4728]: W0227 11:08:01.154867 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c5a8b83_71a1_425d_a3c9_ddf070667c4e.slice/crio-71f73ecc9e1df5c0358e2b8a07ce1fb5b196ab830d9940501573105448973bce WatchSource:0}: Error finding container 71f73ecc9e1df5c0358e2b8a07ce1fb5b196ab830d9940501573105448973bce: Status 404 returned error can't find the container with id 71f73ecc9e1df5c0358e2b8a07ce1fb5b196ab830d9940501573105448973bce Feb 27 11:08:01 crc kubenswrapper[4728]: I0227 11:08:01.158817 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 11:08:01 crc kubenswrapper[4728]: I0227 11:08:01.589923 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536508-bszlh" event={"ID":"6c5a8b83-71a1-425d-a3c9-ddf070667c4e","Type":"ContainerStarted","Data":"71f73ecc9e1df5c0358e2b8a07ce1fb5b196ab830d9940501573105448973bce"} Feb 27 11:08:03 crc kubenswrapper[4728]: I0227 11:08:03.614070 4728 generic.go:334] "Generic (PLEG): container finished" podID="6c5a8b83-71a1-425d-a3c9-ddf070667c4e" containerID="69523dc5e436cbb7808f98975e0868811490027a3a212f190267f0819f09fe50" exitCode=0 Feb 27 11:08:03 crc kubenswrapper[4728]: I0227 11:08:03.614179 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536508-bszlh" 
event={"ID":"6c5a8b83-71a1-425d-a3c9-ddf070667c4e","Type":"ContainerDied","Data":"69523dc5e436cbb7808f98975e0868811490027a3a212f190267f0819f09fe50"} Feb 27 11:08:05 crc kubenswrapper[4728]: I0227 11:08:05.055033 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536508-bszlh" Feb 27 11:08:05 crc kubenswrapper[4728]: I0227 11:08:05.171822 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj55q\" (UniqueName: \"kubernetes.io/projected/6c5a8b83-71a1-425d-a3c9-ddf070667c4e-kube-api-access-dj55q\") pod \"6c5a8b83-71a1-425d-a3c9-ddf070667c4e\" (UID: \"6c5a8b83-71a1-425d-a3c9-ddf070667c4e\") " Feb 27 11:08:05 crc kubenswrapper[4728]: I0227 11:08:05.184854 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c5a8b83-71a1-425d-a3c9-ddf070667c4e-kube-api-access-dj55q" (OuterVolumeSpecName: "kube-api-access-dj55q") pod "6c5a8b83-71a1-425d-a3c9-ddf070667c4e" (UID: "6c5a8b83-71a1-425d-a3c9-ddf070667c4e"). InnerVolumeSpecName "kube-api-access-dj55q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:08:05 crc kubenswrapper[4728]: I0227 11:08:05.276663 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj55q\" (UniqueName: \"kubernetes.io/projected/6c5a8b83-71a1-425d-a3c9-ddf070667c4e-kube-api-access-dj55q\") on node \"crc\" DevicePath \"\"" Feb 27 11:08:05 crc kubenswrapper[4728]: I0227 11:08:05.642900 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536508-bszlh" event={"ID":"6c5a8b83-71a1-425d-a3c9-ddf070667c4e","Type":"ContainerDied","Data":"71f73ecc9e1df5c0358e2b8a07ce1fb5b196ab830d9940501573105448973bce"} Feb 27 11:08:05 crc kubenswrapper[4728]: I0227 11:08:05.642962 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71f73ecc9e1df5c0358e2b8a07ce1fb5b196ab830d9940501573105448973bce" Feb 27 11:08:05 crc kubenswrapper[4728]: I0227 11:08:05.642971 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536508-bszlh" Feb 27 11:08:05 crc kubenswrapper[4728]: I0227 11:08:05.922395 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:08:05 crc kubenswrapper[4728]: I0227 11:08:05.922482 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:08:05 crc kubenswrapper[4728]: I0227 11:08:05.922577 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 11:08:05 crc kubenswrapper[4728]: I0227 11:08:05.924025 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5"} pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 11:08:05 crc kubenswrapper[4728]: I0227 11:08:05.924130 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" containerID="cri-o://57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" gracePeriod=600 Feb 27 11:08:06 crc kubenswrapper[4728]: E0227 11:08:06.056006 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:08:06 crc kubenswrapper[4728]: I0227 11:08:06.126241 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536502-nm4pd"] Feb 27 11:08:06 crc kubenswrapper[4728]: I0227 11:08:06.137250 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536502-nm4pd"] Feb 27 11:08:06 crc kubenswrapper[4728]: I0227 11:08:06.662208 4728 generic.go:334] "Generic (PLEG): container finished" podID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" exitCode=0 Feb 27 
11:08:06 crc kubenswrapper[4728]: I0227 11:08:06.662264 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerDied","Data":"57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5"} Feb 27 11:08:06 crc kubenswrapper[4728]: I0227 11:08:06.662556 4728 scope.go:117] "RemoveContainer" containerID="d8d628b62f7040f098c44223cc7f9acb05d4ddab156eb45d341a5998e4060b93" Feb 27 11:08:06 crc kubenswrapper[4728]: I0227 11:08:06.663624 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:08:06 crc kubenswrapper[4728]: E0227 11:08:06.664074 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:08:06 crc kubenswrapper[4728]: I0227 11:08:06.756631 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a3bdcec-52e7-41a6-b551-b4a93c8dce37" path="/var/lib/kubelet/pods/4a3bdcec-52e7-41a6-b551-b4a93c8dce37/volumes" Feb 27 11:08:18 crc kubenswrapper[4728]: I0227 11:08:18.725062 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:08:18 crc kubenswrapper[4728]: E0227 11:08:18.725889 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:08:24 crc kubenswrapper[4728]: I0227 11:08:24.280182 4728 scope.go:117] "RemoveContainer" containerID="6bfe573f5a38d7d588eab3f852f7360fd17cfdb0d3a7e931c6735fe74876d6a4" Feb 27 11:08:32 crc kubenswrapper[4728]: I0227 11:08:32.727021 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:08:32 crc kubenswrapper[4728]: E0227 11:08:32.728080 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:08:47 crc kubenswrapper[4728]: I0227 11:08:47.725693 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:08:47 crc kubenswrapper[4728]: E0227 11:08:47.726800 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:08:58 crc kubenswrapper[4728]: I0227 11:08:58.724971 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:08:58 crc kubenswrapper[4728]: E0227 11:08:58.727224 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:09:11 crc kubenswrapper[4728]: I0227 11:09:11.724899 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:09:11 crc kubenswrapper[4728]: E0227 11:09:11.725842 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:09:23 crc kubenswrapper[4728]: I0227 11:09:23.728063 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:09:23 crc kubenswrapper[4728]: E0227 11:09:23.729958 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:09:35 crc kubenswrapper[4728]: I0227 11:09:35.724639 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:09:35 crc kubenswrapper[4728]: E0227 11:09:35.725699 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:09:50 crc kubenswrapper[4728]: I0227 11:09:50.734976 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:09:50 crc kubenswrapper[4728]: E0227 11:09:50.736041 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:09:59 crc kubenswrapper[4728]: I0227 11:09:59.088756 4728 generic.go:334] "Generic (PLEG): container finished" podID="366ef133-dc99-408d-9a1b-220869733a30" containerID="db72c030645bb355cc2371ad2b5e47e40f5444be8164a63adfa4cd3a5725f5e7" exitCode=0 Feb 27 11:09:59 crc kubenswrapper[4728]: I0227 11:09:59.089286 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk" event={"ID":"366ef133-dc99-408d-9a1b-220869733a30","Type":"ContainerDied","Data":"db72c030645bb355cc2371ad2b5e47e40f5444be8164a63adfa4cd3a5725f5e7"} Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.183715 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536510-rhn8w"] Feb 27 11:10:00 crc kubenswrapper[4728]: E0227 11:10:00.184867 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5a8b83-71a1-425d-a3c9-ddf070667c4e" containerName="oc" Feb 27 11:10:00 crc 
kubenswrapper[4728]: I0227 11:10:00.184890 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c5a8b83-71a1-425d-a3c9-ddf070667c4e" containerName="oc" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.185312 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c5a8b83-71a1-425d-a3c9-ddf070667c4e" containerName="oc" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.186691 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536510-rhn8w" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.194183 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.194228 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.194181 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.197929 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536510-rhn8w"] Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.340268 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps2s2\" (UniqueName: \"kubernetes.io/projected/44525c67-9312-4bb2-8c2c-aadef5c13d86-kube-api-access-ps2s2\") pod \"auto-csr-approver-29536510-rhn8w\" (UID: \"44525c67-9312-4bb2-8c2c-aadef5c13d86\") " pod="openshift-infra/auto-csr-approver-29536510-rhn8w" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.443903 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps2s2\" (UniqueName: \"kubernetes.io/projected/44525c67-9312-4bb2-8c2c-aadef5c13d86-kube-api-access-ps2s2\") pod \"auto-csr-approver-29536510-rhn8w\" 
(UID: \"44525c67-9312-4bb2-8c2c-aadef5c13d86\") " pod="openshift-infra/auto-csr-approver-29536510-rhn8w" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.481535 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps2s2\" (UniqueName: \"kubernetes.io/projected/44525c67-9312-4bb2-8c2c-aadef5c13d86-kube-api-access-ps2s2\") pod \"auto-csr-approver-29536510-rhn8w\" (UID: \"44525c67-9312-4bb2-8c2c-aadef5c13d86\") " pod="openshift-infra/auto-csr-approver-29536510-rhn8w" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.525732 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536510-rhn8w" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.759008 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.860811 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-libvirt-secret-0\") pod \"366ef133-dc99-408d-9a1b-220869733a30\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.861071 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw988\" (UniqueName: \"kubernetes.io/projected/366ef133-dc99-408d-9a1b-220869733a30-kube-api-access-tw988\") pod \"366ef133-dc99-408d-9a1b-220869733a30\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.861126 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-ssh-key-openstack-edpm-ipam\") pod \"366ef133-dc99-408d-9a1b-220869733a30\" (UID: 
\"366ef133-dc99-408d-9a1b-220869733a30\") " Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.861220 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-libvirt-combined-ca-bundle\") pod \"366ef133-dc99-408d-9a1b-220869733a30\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.861357 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-inventory\") pod \"366ef133-dc99-408d-9a1b-220869733a30\" (UID: \"366ef133-dc99-408d-9a1b-220869733a30\") " Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.866973 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "366ef133-dc99-408d-9a1b-220869733a30" (UID: "366ef133-dc99-408d-9a1b-220869733a30"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.874121 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/366ef133-dc99-408d-9a1b-220869733a30-kube-api-access-tw988" (OuterVolumeSpecName: "kube-api-access-tw988") pod "366ef133-dc99-408d-9a1b-220869733a30" (UID: "366ef133-dc99-408d-9a1b-220869733a30"). InnerVolumeSpecName "kube-api-access-tw988". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.900593 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-inventory" (OuterVolumeSpecName: "inventory") pod "366ef133-dc99-408d-9a1b-220869733a30" (UID: "366ef133-dc99-408d-9a1b-220869733a30"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.901273 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "366ef133-dc99-408d-9a1b-220869733a30" (UID: "366ef133-dc99-408d-9a1b-220869733a30"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.903583 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "366ef133-dc99-408d-9a1b-220869733a30" (UID: "366ef133-dc99-408d-9a1b-220869733a30"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.964490 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.964542 4728 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.964554 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw988\" (UniqueName: \"kubernetes.io/projected/366ef133-dc99-408d-9a1b-220869733a30-kube-api-access-tw988\") on node \"crc\" DevicePath \"\"" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.964563 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 11:10:00 crc kubenswrapper[4728]: I0227 11:10:00.964575 4728 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366ef133-dc99-408d-9a1b-220869733a30-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.046779 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536510-rhn8w"] Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.133393 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk" event={"ID":"366ef133-dc99-408d-9a1b-220869733a30","Type":"ContainerDied","Data":"acdd5cd97342ce70f0e93bd1b83b63739ce238b994664d18489913833355b853"} Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 
11:10:01.133430 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acdd5cd97342ce70f0e93bd1b83b63739ce238b994664d18489913833355b853" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.133413 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-25clk" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.134543 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536510-rhn8w" event={"ID":"44525c67-9312-4bb2-8c2c-aadef5c13d86","Type":"ContainerStarted","Data":"33dd4b90f8a252afb8c4b577c0d8dd32058bf8b3c1382527baa2526a718545a2"} Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.291111 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn"] Feb 27 11:10:01 crc kubenswrapper[4728]: E0227 11:10:01.292012 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="366ef133-dc99-408d-9a1b-220869733a30" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.292028 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="366ef133-dc99-408d-9a1b-220869733a30" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.292297 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="366ef133-dc99-408d-9a1b-220869733a30" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.294340 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.299385 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.302489 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.302616 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.302488 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.302909 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.303103 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.303240 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.311816 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn"] Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.386741 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: 
I0227 11:10:01.386794 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.386839 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.386982 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.387081 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.387105 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8zvrq\" (UniqueName: \"kubernetes.io/projected/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-kube-api-access-8zvrq\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.387162 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.387188 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.387293 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.387409 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.387538 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.489001 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.489055 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zvrq\" (UniqueName: \"kubernetes.io/projected/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-kube-api-access-8zvrq\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.489099 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.489125 4728 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.489187 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.489211 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.489246 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.489282 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-3\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.489308 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.489338 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.489384 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.491899 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.495472 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.496690 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.497056 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.502686 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.502856 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.503174 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.504070 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.504729 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.506969 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.519964 4728 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8zvrq\" (UniqueName: \"kubernetes.io/projected/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-kube-api-access-8zvrq\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6gmbn\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:01 crc kubenswrapper[4728]: I0227 11:10:01.613821 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:10:02 crc kubenswrapper[4728]: I0227 11:10:02.209470 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn"] Feb 27 11:10:02 crc kubenswrapper[4728]: W0227 11:10:02.218060 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5b59e71_081a_4fb3_aa6e_4b0e96c0ff03.slice/crio-03856ce91a9bf61dfb52edcf7204e546550142006e04f0944f274d85727e9930 WatchSource:0}: Error finding container 03856ce91a9bf61dfb52edcf7204e546550142006e04f0944f274d85727e9930: Status 404 returned error can't find the container with id 03856ce91a9bf61dfb52edcf7204e546550142006e04f0944f274d85727e9930 Feb 27 11:10:03 crc kubenswrapper[4728]: I0227 11:10:03.216398 4728 generic.go:334] "Generic (PLEG): container finished" podID="44525c67-9312-4bb2-8c2c-aadef5c13d86" containerID="1cacebc133e41beac455f1fb8e5cf30936f230384b0014a6ddf34d3f6a5f72bb" exitCode=0 Feb 27 11:10:03 crc kubenswrapper[4728]: I0227 11:10:03.216923 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536510-rhn8w" event={"ID":"44525c67-9312-4bb2-8c2c-aadef5c13d86","Type":"ContainerDied","Data":"1cacebc133e41beac455f1fb8e5cf30936f230384b0014a6ddf34d3f6a5f72bb"} Feb 27 11:10:03 crc kubenswrapper[4728]: I0227 11:10:03.220729 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" 
event={"ID":"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03","Type":"ContainerStarted","Data":"4e36a9c564ae1846d6e6bb9891c02c0900fb578c611dc221f238046c9b8d99b7"} Feb 27 11:10:03 crc kubenswrapper[4728]: I0227 11:10:03.220762 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" event={"ID":"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03","Type":"ContainerStarted","Data":"03856ce91a9bf61dfb52edcf7204e546550142006e04f0944f274d85727e9930"} Feb 27 11:10:03 crc kubenswrapper[4728]: I0227 11:10:03.277073 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" podStartSLOduration=1.8007323450000001 podStartE2EDuration="2.277043053s" podCreationTimestamp="2026-02-27 11:10:01 +0000 UTC" firstStartedPulling="2026-02-27 11:10:02.222328347 +0000 UTC m=+2622.184694443" lastFinishedPulling="2026-02-27 11:10:02.698639045 +0000 UTC m=+2622.661005151" observedRunningTime="2026-02-27 11:10:03.275542961 +0000 UTC m=+2623.237909067" watchObservedRunningTime="2026-02-27 11:10:03.277043053 +0000 UTC m=+2623.239409160" Feb 27 11:10:04 crc kubenswrapper[4728]: I0227 11:10:04.639078 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536510-rhn8w" Feb 27 11:10:04 crc kubenswrapper[4728]: I0227 11:10:04.692727 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps2s2\" (UniqueName: \"kubernetes.io/projected/44525c67-9312-4bb2-8c2c-aadef5c13d86-kube-api-access-ps2s2\") pod \"44525c67-9312-4bb2-8c2c-aadef5c13d86\" (UID: \"44525c67-9312-4bb2-8c2c-aadef5c13d86\") " Feb 27 11:10:04 crc kubenswrapper[4728]: I0227 11:10:04.709163 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44525c67-9312-4bb2-8c2c-aadef5c13d86-kube-api-access-ps2s2" (OuterVolumeSpecName: "kube-api-access-ps2s2") pod "44525c67-9312-4bb2-8c2c-aadef5c13d86" (UID: "44525c67-9312-4bb2-8c2c-aadef5c13d86"). InnerVolumeSpecName "kube-api-access-ps2s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:10:04 crc kubenswrapper[4728]: I0227 11:10:04.728006 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:10:04 crc kubenswrapper[4728]: E0227 11:10:04.729089 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:10:04 crc kubenswrapper[4728]: I0227 11:10:04.797182 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps2s2\" (UniqueName: \"kubernetes.io/projected/44525c67-9312-4bb2-8c2c-aadef5c13d86-kube-api-access-ps2s2\") on node \"crc\" DevicePath \"\"" Feb 27 11:10:05 crc kubenswrapper[4728]: I0227 11:10:05.249400 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29536510-rhn8w" event={"ID":"44525c67-9312-4bb2-8c2c-aadef5c13d86","Type":"ContainerDied","Data":"33dd4b90f8a252afb8c4b577c0d8dd32058bf8b3c1382527baa2526a718545a2"} Feb 27 11:10:05 crc kubenswrapper[4728]: I0227 11:10:05.249833 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33dd4b90f8a252afb8c4b577c0d8dd32058bf8b3c1382527baa2526a718545a2" Feb 27 11:10:05 crc kubenswrapper[4728]: I0227 11:10:05.249478 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536510-rhn8w" Feb 27 11:10:05 crc kubenswrapper[4728]: I0227 11:10:05.729084 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536504-b82wk"] Feb 27 11:10:05 crc kubenswrapper[4728]: I0227 11:10:05.739416 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536504-b82wk"] Feb 27 11:10:06 crc kubenswrapper[4728]: I0227 11:10:06.739168 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adea2aa6-d1ac-4405-87c6-287a246a5db0" path="/var/lib/kubelet/pods/adea2aa6-d1ac-4405-87c6-287a246a5db0/volumes" Feb 27 11:10:16 crc kubenswrapper[4728]: I0227 11:10:16.725787 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:10:16 crc kubenswrapper[4728]: E0227 11:10:16.727139 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:10:24 crc kubenswrapper[4728]: I0227 11:10:24.407912 4728 scope.go:117] "RemoveContainer" 
containerID="501fc3d34daee083998ac9589c3bfea2b403f8082c9dfe59254aee09e6d7bcd1" Feb 27 11:10:30 crc kubenswrapper[4728]: I0227 11:10:30.739669 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:10:30 crc kubenswrapper[4728]: E0227 11:10:30.740830 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:10:43 crc kubenswrapper[4728]: I0227 11:10:43.726069 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:10:43 crc kubenswrapper[4728]: E0227 11:10:43.727583 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:10:54 crc kubenswrapper[4728]: I0227 11:10:54.727063 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:10:54 crc kubenswrapper[4728]: E0227 11:10:54.730625 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:11:08 crc kubenswrapper[4728]: I0227 11:11:08.726219 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:11:08 crc kubenswrapper[4728]: E0227 11:11:08.727027 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:11:22 crc kubenswrapper[4728]: I0227 11:11:22.726178 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:11:22 crc kubenswrapper[4728]: E0227 11:11:22.727201 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:11:34 crc kubenswrapper[4728]: I0227 11:11:34.725894 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:11:34 crc kubenswrapper[4728]: E0227 11:11:34.727096 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:11:46 crc kubenswrapper[4728]: I0227 11:11:46.725318 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:11:46 crc kubenswrapper[4728]: E0227 11:11:46.726363 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:12:00 crc kubenswrapper[4728]: I0227 11:12:00.152999 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536512-hwz24"] Feb 27 11:12:00 crc kubenswrapper[4728]: E0227 11:12:00.155674 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44525c67-9312-4bb2-8c2c-aadef5c13d86" containerName="oc" Feb 27 11:12:00 crc kubenswrapper[4728]: I0227 11:12:00.155782 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="44525c67-9312-4bb2-8c2c-aadef5c13d86" containerName="oc" Feb 27 11:12:00 crc kubenswrapper[4728]: I0227 11:12:00.156159 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="44525c67-9312-4bb2-8c2c-aadef5c13d86" containerName="oc" Feb 27 11:12:00 crc kubenswrapper[4728]: I0227 11:12:00.157242 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536512-hwz24" Feb 27 11:12:00 crc kubenswrapper[4728]: I0227 11:12:00.163970 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:12:00 crc kubenswrapper[4728]: I0227 11:12:00.165011 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:12:00 crc kubenswrapper[4728]: I0227 11:12:00.169183 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:12:00 crc kubenswrapper[4728]: I0227 11:12:00.172788 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536512-hwz24"] Feb 27 11:12:00 crc kubenswrapper[4728]: I0227 11:12:00.226122 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ssvt\" (UniqueName: \"kubernetes.io/projected/d584de9a-c699-4906-ab5d-5d1b397af97d-kube-api-access-8ssvt\") pod \"auto-csr-approver-29536512-hwz24\" (UID: \"d584de9a-c699-4906-ab5d-5d1b397af97d\") " pod="openshift-infra/auto-csr-approver-29536512-hwz24" Feb 27 11:12:00 crc kubenswrapper[4728]: I0227 11:12:00.328262 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ssvt\" (UniqueName: \"kubernetes.io/projected/d584de9a-c699-4906-ab5d-5d1b397af97d-kube-api-access-8ssvt\") pod \"auto-csr-approver-29536512-hwz24\" (UID: \"d584de9a-c699-4906-ab5d-5d1b397af97d\") " pod="openshift-infra/auto-csr-approver-29536512-hwz24" Feb 27 11:12:00 crc kubenswrapper[4728]: I0227 11:12:00.348339 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ssvt\" (UniqueName: \"kubernetes.io/projected/d584de9a-c699-4906-ab5d-5d1b397af97d-kube-api-access-8ssvt\") pod \"auto-csr-approver-29536512-hwz24\" (UID: \"d584de9a-c699-4906-ab5d-5d1b397af97d\") " 
pod="openshift-infra/auto-csr-approver-29536512-hwz24" Feb 27 11:12:00 crc kubenswrapper[4728]: I0227 11:12:00.480819 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536512-hwz24" Feb 27 11:12:00 crc kubenswrapper[4728]: I0227 11:12:00.930630 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536512-hwz24"] Feb 27 11:12:00 crc kubenswrapper[4728]: W0227 11:12:00.937583 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd584de9a_c699_4906_ab5d_5d1b397af97d.slice/crio-d6dff4e3c1f8dafd943bb2fa90651c88201dc1c710dcabec72ecc03ea859c371 WatchSource:0}: Error finding container d6dff4e3c1f8dafd943bb2fa90651c88201dc1c710dcabec72ecc03ea859c371: Status 404 returned error can't find the container with id d6dff4e3c1f8dafd943bb2fa90651c88201dc1c710dcabec72ecc03ea859c371 Feb 27 11:12:01 crc kubenswrapper[4728]: I0227 11:12:01.036098 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536512-hwz24" event={"ID":"d584de9a-c699-4906-ab5d-5d1b397af97d","Type":"ContainerStarted","Data":"d6dff4e3c1f8dafd943bb2fa90651c88201dc1c710dcabec72ecc03ea859c371"} Feb 27 11:12:01 crc kubenswrapper[4728]: I0227 11:12:01.726227 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:12:01 crc kubenswrapper[4728]: E0227 11:12:01.727041 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:12:03 crc 
kubenswrapper[4728]: I0227 11:12:03.083731 4728 generic.go:334] "Generic (PLEG): container finished" podID="d584de9a-c699-4906-ab5d-5d1b397af97d" containerID="b9b10798071270ca99b775eff520c8776431ae9c8465dd6e7a8b593c1ca205df" exitCode=0 Feb 27 11:12:03 crc kubenswrapper[4728]: I0227 11:12:03.083784 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536512-hwz24" event={"ID":"d584de9a-c699-4906-ab5d-5d1b397af97d","Type":"ContainerDied","Data":"b9b10798071270ca99b775eff520c8776431ae9c8465dd6e7a8b593c1ca205df"} Feb 27 11:12:04 crc kubenswrapper[4728]: I0227 11:12:04.623637 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536512-hwz24" Feb 27 11:12:04 crc kubenswrapper[4728]: I0227 11:12:04.652032 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ssvt\" (UniqueName: \"kubernetes.io/projected/d584de9a-c699-4906-ab5d-5d1b397af97d-kube-api-access-8ssvt\") pod \"d584de9a-c699-4906-ab5d-5d1b397af97d\" (UID: \"d584de9a-c699-4906-ab5d-5d1b397af97d\") " Feb 27 11:12:04 crc kubenswrapper[4728]: I0227 11:12:04.663417 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d584de9a-c699-4906-ab5d-5d1b397af97d-kube-api-access-8ssvt" (OuterVolumeSpecName: "kube-api-access-8ssvt") pod "d584de9a-c699-4906-ab5d-5d1b397af97d" (UID: "d584de9a-c699-4906-ab5d-5d1b397af97d"). InnerVolumeSpecName "kube-api-access-8ssvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:12:04 crc kubenswrapper[4728]: I0227 11:12:04.757916 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ssvt\" (UniqueName: \"kubernetes.io/projected/d584de9a-c699-4906-ab5d-5d1b397af97d-kube-api-access-8ssvt\") on node \"crc\" DevicePath \"\"" Feb 27 11:12:05 crc kubenswrapper[4728]: I0227 11:12:05.106475 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536512-hwz24" event={"ID":"d584de9a-c699-4906-ab5d-5d1b397af97d","Type":"ContainerDied","Data":"d6dff4e3c1f8dafd943bb2fa90651c88201dc1c710dcabec72ecc03ea859c371"} Feb 27 11:12:05 crc kubenswrapper[4728]: I0227 11:12:05.106572 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6dff4e3c1f8dafd943bb2fa90651c88201dc1c710dcabec72ecc03ea859c371" Feb 27 11:12:05 crc kubenswrapper[4728]: I0227 11:12:05.106493 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536512-hwz24" Feb 27 11:12:05 crc kubenswrapper[4728]: I0227 11:12:05.702359 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536506-p6fnn"] Feb 27 11:12:05 crc kubenswrapper[4728]: I0227 11:12:05.714519 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536506-p6fnn"] Feb 27 11:12:06 crc kubenswrapper[4728]: I0227 11:12:06.743691 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fb7e9fc-8c07-4932-8b4c-6e07b0c76147" path="/var/lib/kubelet/pods/9fb7e9fc-8c07-4932-8b4c-6e07b0c76147/volumes" Feb 27 11:12:14 crc kubenswrapper[4728]: I0227 11:12:14.727155 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:12:14 crc kubenswrapper[4728]: E0227 11:12:14.727883 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:12:24 crc kubenswrapper[4728]: I0227 11:12:24.530805 4728 scope.go:117] "RemoveContainer" containerID="bb830deb67d6bc6ce24596259babcbae96fb267be31d8b66aa57dc70b0c75c1d" Feb 27 11:12:27 crc kubenswrapper[4728]: I0227 11:12:27.725100 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:12:27 crc kubenswrapper[4728]: E0227 11:12:27.726043 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:12:33 crc kubenswrapper[4728]: I0227 11:12:33.495600 4728 generic.go:334] "Generic (PLEG): container finished" podID="f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03" containerID="4e36a9c564ae1846d6e6bb9891c02c0900fb578c611dc221f238046c9b8d99b7" exitCode=0 Feb 27 11:12:33 crc kubenswrapper[4728]: I0227 11:12:33.496197 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" event={"ID":"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03","Type":"ContainerDied","Data":"4e36a9c564ae1846d6e6bb9891c02c0900fb578c611dc221f238046c9b8d99b7"} Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.053136 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.192224 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-1\") pod \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.193123 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-0\") pod \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.193175 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-inventory\") pod \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.193288 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-migration-ssh-key-1\") pod \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.193331 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-migration-ssh-key-0\") pod \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.193468 4728 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-2\") pod \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.193540 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-combined-ca-bundle\") pod \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.193625 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zvrq\" (UniqueName: \"kubernetes.io/projected/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-kube-api-access-8zvrq\") pod \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.193735 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-ssh-key-openstack-edpm-ipam\") pod \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.193773 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-3\") pod \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.193798 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-extra-config-0\") pod \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\" (UID: \"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03\") " Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.199275 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03" (UID: "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.213595 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-kube-api-access-8zvrq" (OuterVolumeSpecName: "kube-api-access-8zvrq") pod "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03" (UID: "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03"). InnerVolumeSpecName "kube-api-access-8zvrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.240126 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03" (UID: "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.241810 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-inventory" (OuterVolumeSpecName: "inventory") pod "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03" (UID: "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.246236 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03" (UID: "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.250728 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03" (UID: "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.256363 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03" (UID: "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.266113 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03" (UID: "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.279959 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03" (UID: "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.288717 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03" (UID: "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.297258 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zvrq\" (UniqueName: \"kubernetes.io/projected/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-kube-api-access-8zvrq\") on node \"crc\" DevicePath \"\"" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.297296 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.297308 4728 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.297320 4728 reconciler_common.go:293] "Volume detached for volume 
\"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.297331 4728 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.297342 4728 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.297355 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.297365 4728 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.297374 4728 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.297384 4728 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.313284 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03" (UID: "f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.399776 4728 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.518441 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" event={"ID":"f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03","Type":"ContainerDied","Data":"03856ce91a9bf61dfb52edcf7204e546550142006e04f0944f274d85727e9930"} Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.518515 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03856ce91a9bf61dfb52edcf7204e546550142006e04f0944f274d85727e9930" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.518569 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6gmbn" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.625542 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q"] Feb 27 11:12:35 crc kubenswrapper[4728]: E0227 11:12:35.625995 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.626011 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 27 11:12:35 crc kubenswrapper[4728]: E0227 11:12:35.626039 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d584de9a-c699-4906-ab5d-5d1b397af97d" containerName="oc" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.626045 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d584de9a-c699-4906-ab5d-5d1b397af97d" containerName="oc" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.626268 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="d584de9a-c699-4906-ab5d-5d1b397af97d" containerName="oc" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.626298 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.627080 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.629744 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.632214 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.632454 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.632474 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.632695 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.655840 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q"] Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.706831 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.706919 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ssh-key-openstack-edpm-ipam\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.707014 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.707333 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.707489 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.707626 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.707655 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzd6t\" (UniqueName: \"kubernetes.io/projected/f509a2d6-f273-4497-8dad-171d2f53d125-kube-api-access-pzd6t\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.810536 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.810674 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.810772 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.810806 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pzd6t\" (UniqueName: \"kubernetes.io/projected/f509a2d6-f273-4497-8dad-171d2f53d125-kube-api-access-pzd6t\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.810884 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.810990 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.812565 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.816795 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-inventory\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.817175 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.817349 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.818027 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.820804 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.822819 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.829771 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzd6t\" (UniqueName: \"kubernetes.io/projected/f509a2d6-f273-4497-8dad-171d2f53d125-kube-api-access-pzd6t\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:35 crc kubenswrapper[4728]: I0227 11:12:35.953215 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:12:36 crc kubenswrapper[4728]: I0227 11:12:36.584572 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q"] Feb 27 11:12:37 crc kubenswrapper[4728]: I0227 11:12:37.543189 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" event={"ID":"f509a2d6-f273-4497-8dad-171d2f53d125","Type":"ContainerStarted","Data":"3532c67a6ff71e407cffd7e041b216b5bdc3c3167e8cd40f12f7c6f2c56f3edb"} Feb 27 11:12:38 crc kubenswrapper[4728]: I0227 11:12:38.571011 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" event={"ID":"f509a2d6-f273-4497-8dad-171d2f53d125","Type":"ContainerStarted","Data":"f53b9d417d66a58a6df449df38f90259a83e9939925cdf8797a2b073c6ba2fcb"} Feb 27 11:12:38 crc kubenswrapper[4728]: I0227 11:12:38.599593 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" podStartSLOduration=2.477249356 podStartE2EDuration="3.599555233s" podCreationTimestamp="2026-02-27 11:12:35 +0000 UTC" firstStartedPulling="2026-02-27 11:12:36.577781129 +0000 UTC m=+2776.540147235" lastFinishedPulling="2026-02-27 11:12:37.700086986 +0000 UTC m=+2777.662453112" observedRunningTime="2026-02-27 11:12:38.598113982 +0000 UTC m=+2778.560480128" watchObservedRunningTime="2026-02-27 11:12:38.599555233 +0000 UTC m=+2778.561921429" Feb 27 11:12:38 crc kubenswrapper[4728]: I0227 11:12:38.725789 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:12:38 crc kubenswrapper[4728]: E0227 11:12:38.726351 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:12:47 crc kubenswrapper[4728]: I0227 11:12:47.202749 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4vls6"] Feb 27 11:12:47 crc kubenswrapper[4728]: I0227 11:12:47.207671 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4vls6" Feb 27 11:12:47 crc kubenswrapper[4728]: I0227 11:12:47.215271 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4vls6"] Feb 27 11:12:47 crc kubenswrapper[4728]: I0227 11:12:47.341050 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397dc104-a576-4d10-a7ac-3b8da678f5ec-utilities\") pod \"certified-operators-4vls6\" (UID: \"397dc104-a576-4d10-a7ac-3b8da678f5ec\") " pod="openshift-marketplace/certified-operators-4vls6" Feb 27 11:12:47 crc kubenswrapper[4728]: I0227 11:12:47.341206 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8spm\" (UniqueName: \"kubernetes.io/projected/397dc104-a576-4d10-a7ac-3b8da678f5ec-kube-api-access-p8spm\") pod \"certified-operators-4vls6\" (UID: \"397dc104-a576-4d10-a7ac-3b8da678f5ec\") " pod="openshift-marketplace/certified-operators-4vls6" Feb 27 11:12:47 crc kubenswrapper[4728]: I0227 11:12:47.341422 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397dc104-a576-4d10-a7ac-3b8da678f5ec-catalog-content\") pod \"certified-operators-4vls6\" (UID: 
\"397dc104-a576-4d10-a7ac-3b8da678f5ec\") " pod="openshift-marketplace/certified-operators-4vls6" Feb 27 11:12:47 crc kubenswrapper[4728]: I0227 11:12:47.444557 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397dc104-a576-4d10-a7ac-3b8da678f5ec-utilities\") pod \"certified-operators-4vls6\" (UID: \"397dc104-a576-4d10-a7ac-3b8da678f5ec\") " pod="openshift-marketplace/certified-operators-4vls6" Feb 27 11:12:47 crc kubenswrapper[4728]: I0227 11:12:47.444939 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8spm\" (UniqueName: \"kubernetes.io/projected/397dc104-a576-4d10-a7ac-3b8da678f5ec-kube-api-access-p8spm\") pod \"certified-operators-4vls6\" (UID: \"397dc104-a576-4d10-a7ac-3b8da678f5ec\") " pod="openshift-marketplace/certified-operators-4vls6" Feb 27 11:12:47 crc kubenswrapper[4728]: I0227 11:12:47.445170 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397dc104-a576-4d10-a7ac-3b8da678f5ec-catalog-content\") pod \"certified-operators-4vls6\" (UID: \"397dc104-a576-4d10-a7ac-3b8da678f5ec\") " pod="openshift-marketplace/certified-operators-4vls6" Feb 27 11:12:47 crc kubenswrapper[4728]: I0227 11:12:47.445444 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397dc104-a576-4d10-a7ac-3b8da678f5ec-catalog-content\") pod \"certified-operators-4vls6\" (UID: \"397dc104-a576-4d10-a7ac-3b8da678f5ec\") " pod="openshift-marketplace/certified-operators-4vls6" Feb 27 11:12:47 crc kubenswrapper[4728]: I0227 11:12:47.445220 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397dc104-a576-4d10-a7ac-3b8da678f5ec-utilities\") pod \"certified-operators-4vls6\" (UID: \"397dc104-a576-4d10-a7ac-3b8da678f5ec\") 
" pod="openshift-marketplace/certified-operators-4vls6" Feb 27 11:12:47 crc kubenswrapper[4728]: I0227 11:12:47.464586 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8spm\" (UniqueName: \"kubernetes.io/projected/397dc104-a576-4d10-a7ac-3b8da678f5ec-kube-api-access-p8spm\") pod \"certified-operators-4vls6\" (UID: \"397dc104-a576-4d10-a7ac-3b8da678f5ec\") " pod="openshift-marketplace/certified-operators-4vls6" Feb 27 11:12:47 crc kubenswrapper[4728]: I0227 11:12:47.540724 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4vls6" Feb 27 11:12:48 crc kubenswrapper[4728]: I0227 11:12:48.096221 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4vls6"] Feb 27 11:12:48 crc kubenswrapper[4728]: I0227 11:12:48.693713 4728 generic.go:334] "Generic (PLEG): container finished" podID="397dc104-a576-4d10-a7ac-3b8da678f5ec" containerID="b815ecaac0437309dc684059525cb36ae7548441ddcbc52bd965b6e1df7d7d32" exitCode=0 Feb 27 11:12:48 crc kubenswrapper[4728]: I0227 11:12:48.693873 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vls6" event={"ID":"397dc104-a576-4d10-a7ac-3b8da678f5ec","Type":"ContainerDied","Data":"b815ecaac0437309dc684059525cb36ae7548441ddcbc52bd965b6e1df7d7d32"} Feb 27 11:12:48 crc kubenswrapper[4728]: I0227 11:12:48.694231 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vls6" event={"ID":"397dc104-a576-4d10-a7ac-3b8da678f5ec","Type":"ContainerStarted","Data":"ff7506e55383652913d4c8e1847239f65683bfabd80d8112e98d3276bfc6625c"} Feb 27 11:12:49 crc kubenswrapper[4728]: I0227 11:12:49.708256 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vls6" 
event={"ID":"397dc104-a576-4d10-a7ac-3b8da678f5ec","Type":"ContainerStarted","Data":"37b6ae5c5d94e43a4772f508bd678c3c62d775c763d4f28c377e6ee17a04b954"} Feb 27 11:12:51 crc kubenswrapper[4728]: I0227 11:12:51.725421 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:12:51 crc kubenswrapper[4728]: E0227 11:12:51.729244 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:12:51 crc kubenswrapper[4728]: I0227 11:12:51.740389 4728 generic.go:334] "Generic (PLEG): container finished" podID="397dc104-a576-4d10-a7ac-3b8da678f5ec" containerID="37b6ae5c5d94e43a4772f508bd678c3c62d775c763d4f28c377e6ee17a04b954" exitCode=0 Feb 27 11:12:51 crc kubenswrapper[4728]: I0227 11:12:51.740460 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vls6" event={"ID":"397dc104-a576-4d10-a7ac-3b8da678f5ec","Type":"ContainerDied","Data":"37b6ae5c5d94e43a4772f508bd678c3c62d775c763d4f28c377e6ee17a04b954"} Feb 27 11:12:52 crc kubenswrapper[4728]: I0227 11:12:52.768312 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vls6" event={"ID":"397dc104-a576-4d10-a7ac-3b8da678f5ec","Type":"ContainerStarted","Data":"69cbf7cb26f007e804ce002318e9bad37970d4b56128d090dac3f730dc838388"} Feb 27 11:12:52 crc kubenswrapper[4728]: I0227 11:12:52.799963 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4vls6" podStartSLOduration=2.309147693 podStartE2EDuration="5.799941281s" 
podCreationTimestamp="2026-02-27 11:12:47 +0000 UTC" firstStartedPulling="2026-02-27 11:12:48.697619494 +0000 UTC m=+2788.659985600" lastFinishedPulling="2026-02-27 11:12:52.188413072 +0000 UTC m=+2792.150779188" observedRunningTime="2026-02-27 11:12:52.790048629 +0000 UTC m=+2792.752414795" watchObservedRunningTime="2026-02-27 11:12:52.799941281 +0000 UTC m=+2792.762307387" Feb 27 11:12:57 crc kubenswrapper[4728]: I0227 11:12:57.541044 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4vls6" Feb 27 11:12:57 crc kubenswrapper[4728]: I0227 11:12:57.542080 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4vls6" Feb 27 11:12:57 crc kubenswrapper[4728]: I0227 11:12:57.634924 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4vls6" Feb 27 11:12:57 crc kubenswrapper[4728]: I0227 11:12:57.912913 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4vls6" Feb 27 11:12:57 crc kubenswrapper[4728]: I0227 11:12:57.970386 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4vls6"] Feb 27 11:12:59 crc kubenswrapper[4728]: I0227 11:12:59.864236 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4vls6" podUID="397dc104-a576-4d10-a7ac-3b8da678f5ec" containerName="registry-server" containerID="cri-o://69cbf7cb26f007e804ce002318e9bad37970d4b56128d090dac3f730dc838388" gracePeriod=2 Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.432251 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4vls6" Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.609290 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397dc104-a576-4d10-a7ac-3b8da678f5ec-utilities\") pod \"397dc104-a576-4d10-a7ac-3b8da678f5ec\" (UID: \"397dc104-a576-4d10-a7ac-3b8da678f5ec\") " Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.609431 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8spm\" (UniqueName: \"kubernetes.io/projected/397dc104-a576-4d10-a7ac-3b8da678f5ec-kube-api-access-p8spm\") pod \"397dc104-a576-4d10-a7ac-3b8da678f5ec\" (UID: \"397dc104-a576-4d10-a7ac-3b8da678f5ec\") " Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.609777 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397dc104-a576-4d10-a7ac-3b8da678f5ec-catalog-content\") pod \"397dc104-a576-4d10-a7ac-3b8da678f5ec\" (UID: \"397dc104-a576-4d10-a7ac-3b8da678f5ec\") " Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.611097 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/397dc104-a576-4d10-a7ac-3b8da678f5ec-utilities" (OuterVolumeSpecName: "utilities") pod "397dc104-a576-4d10-a7ac-3b8da678f5ec" (UID: "397dc104-a576-4d10-a7ac-3b8da678f5ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.616420 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/397dc104-a576-4d10-a7ac-3b8da678f5ec-kube-api-access-p8spm" (OuterVolumeSpecName: "kube-api-access-p8spm") pod "397dc104-a576-4d10-a7ac-3b8da678f5ec" (UID: "397dc104-a576-4d10-a7ac-3b8da678f5ec"). InnerVolumeSpecName "kube-api-access-p8spm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.672455 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/397dc104-a576-4d10-a7ac-3b8da678f5ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "397dc104-a576-4d10-a7ac-3b8da678f5ec" (UID: "397dc104-a576-4d10-a7ac-3b8da678f5ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.712856 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397dc104-a576-4d10-a7ac-3b8da678f5ec-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.712897 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397dc104-a576-4d10-a7ac-3b8da678f5ec-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.712910 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8spm\" (UniqueName: \"kubernetes.io/projected/397dc104-a576-4d10-a7ac-3b8da678f5ec-kube-api-access-p8spm\") on node \"crc\" DevicePath \"\"" Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.887877 4728 generic.go:334] "Generic (PLEG): container finished" podID="397dc104-a576-4d10-a7ac-3b8da678f5ec" containerID="69cbf7cb26f007e804ce002318e9bad37970d4b56128d090dac3f730dc838388" exitCode=0 Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.887958 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vls6" event={"ID":"397dc104-a576-4d10-a7ac-3b8da678f5ec","Type":"ContainerDied","Data":"69cbf7cb26f007e804ce002318e9bad37970d4b56128d090dac3f730dc838388"} Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.888019 4728 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4vls6" Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.888037 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vls6" event={"ID":"397dc104-a576-4d10-a7ac-3b8da678f5ec","Type":"ContainerDied","Data":"ff7506e55383652913d4c8e1847239f65683bfabd80d8112e98d3276bfc6625c"} Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.888086 4728 scope.go:117] "RemoveContainer" containerID="69cbf7cb26f007e804ce002318e9bad37970d4b56128d090dac3f730dc838388" Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.928413 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4vls6"] Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.934740 4728 scope.go:117] "RemoveContainer" containerID="37b6ae5c5d94e43a4772f508bd678c3c62d775c763d4f28c377e6ee17a04b954" Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.941172 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4vls6"] Feb 27 11:13:00 crc kubenswrapper[4728]: I0227 11:13:00.964169 4728 scope.go:117] "RemoveContainer" containerID="b815ecaac0437309dc684059525cb36ae7548441ddcbc52bd965b6e1df7d7d32" Feb 27 11:13:01 crc kubenswrapper[4728]: I0227 11:13:01.020455 4728 scope.go:117] "RemoveContainer" containerID="69cbf7cb26f007e804ce002318e9bad37970d4b56128d090dac3f730dc838388" Feb 27 11:13:01 crc kubenswrapper[4728]: E0227 11:13:01.020932 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69cbf7cb26f007e804ce002318e9bad37970d4b56128d090dac3f730dc838388\": container with ID starting with 69cbf7cb26f007e804ce002318e9bad37970d4b56128d090dac3f730dc838388 not found: ID does not exist" containerID="69cbf7cb26f007e804ce002318e9bad37970d4b56128d090dac3f730dc838388" Feb 27 11:13:01 crc kubenswrapper[4728]: I0227 11:13:01.020988 
4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69cbf7cb26f007e804ce002318e9bad37970d4b56128d090dac3f730dc838388"} err="failed to get container status \"69cbf7cb26f007e804ce002318e9bad37970d4b56128d090dac3f730dc838388\": rpc error: code = NotFound desc = could not find container \"69cbf7cb26f007e804ce002318e9bad37970d4b56128d090dac3f730dc838388\": container with ID starting with 69cbf7cb26f007e804ce002318e9bad37970d4b56128d090dac3f730dc838388 not found: ID does not exist" Feb 27 11:13:01 crc kubenswrapper[4728]: I0227 11:13:01.021022 4728 scope.go:117] "RemoveContainer" containerID="37b6ae5c5d94e43a4772f508bd678c3c62d775c763d4f28c377e6ee17a04b954" Feb 27 11:13:01 crc kubenswrapper[4728]: E0227 11:13:01.021432 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37b6ae5c5d94e43a4772f508bd678c3c62d775c763d4f28c377e6ee17a04b954\": container with ID starting with 37b6ae5c5d94e43a4772f508bd678c3c62d775c763d4f28c377e6ee17a04b954 not found: ID does not exist" containerID="37b6ae5c5d94e43a4772f508bd678c3c62d775c763d4f28c377e6ee17a04b954" Feb 27 11:13:01 crc kubenswrapper[4728]: I0227 11:13:01.021552 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37b6ae5c5d94e43a4772f508bd678c3c62d775c763d4f28c377e6ee17a04b954"} err="failed to get container status \"37b6ae5c5d94e43a4772f508bd678c3c62d775c763d4f28c377e6ee17a04b954\": rpc error: code = NotFound desc = could not find container \"37b6ae5c5d94e43a4772f508bd678c3c62d775c763d4f28c377e6ee17a04b954\": container with ID starting with 37b6ae5c5d94e43a4772f508bd678c3c62d775c763d4f28c377e6ee17a04b954 not found: ID does not exist" Feb 27 11:13:01 crc kubenswrapper[4728]: I0227 11:13:01.021615 4728 scope.go:117] "RemoveContainer" containerID="b815ecaac0437309dc684059525cb36ae7548441ddcbc52bd965b6e1df7d7d32" Feb 27 11:13:01 crc kubenswrapper[4728]: E0227 
11:13:01.022060 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b815ecaac0437309dc684059525cb36ae7548441ddcbc52bd965b6e1df7d7d32\": container with ID starting with b815ecaac0437309dc684059525cb36ae7548441ddcbc52bd965b6e1df7d7d32 not found: ID does not exist" containerID="b815ecaac0437309dc684059525cb36ae7548441ddcbc52bd965b6e1df7d7d32" Feb 27 11:13:01 crc kubenswrapper[4728]: I0227 11:13:01.022134 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b815ecaac0437309dc684059525cb36ae7548441ddcbc52bd965b6e1df7d7d32"} err="failed to get container status \"b815ecaac0437309dc684059525cb36ae7548441ddcbc52bd965b6e1df7d7d32\": rpc error: code = NotFound desc = could not find container \"b815ecaac0437309dc684059525cb36ae7548441ddcbc52bd965b6e1df7d7d32\": container with ID starting with b815ecaac0437309dc684059525cb36ae7548441ddcbc52bd965b6e1df7d7d32 not found: ID does not exist" Feb 27 11:13:02 crc kubenswrapper[4728]: I0227 11:13:02.741700 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="397dc104-a576-4d10-a7ac-3b8da678f5ec" path="/var/lib/kubelet/pods/397dc104-a576-4d10-a7ac-3b8da678f5ec/volumes" Feb 27 11:13:06 crc kubenswrapper[4728]: I0227 11:13:06.725291 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:13:08 crc kubenswrapper[4728]: I0227 11:13:08.005634 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"2c94f1a9a09928f530d57cf1af411e7a1834b20d8115a07791ee034fe585d7b3"} Feb 27 11:13:10 crc kubenswrapper[4728]: I0227 11:13:10.377892 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n7vgn"] Feb 27 11:13:10 crc kubenswrapper[4728]: E0227 
11:13:10.379468 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="397dc104-a576-4d10-a7ac-3b8da678f5ec" containerName="extract-utilities" Feb 27 11:13:10 crc kubenswrapper[4728]: I0227 11:13:10.379490 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="397dc104-a576-4d10-a7ac-3b8da678f5ec" containerName="extract-utilities" Feb 27 11:13:10 crc kubenswrapper[4728]: E0227 11:13:10.379555 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="397dc104-a576-4d10-a7ac-3b8da678f5ec" containerName="registry-server" Feb 27 11:13:10 crc kubenswrapper[4728]: I0227 11:13:10.379565 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="397dc104-a576-4d10-a7ac-3b8da678f5ec" containerName="registry-server" Feb 27 11:13:10 crc kubenswrapper[4728]: E0227 11:13:10.379597 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="397dc104-a576-4d10-a7ac-3b8da678f5ec" containerName="extract-content" Feb 27 11:13:10 crc kubenswrapper[4728]: I0227 11:13:10.379605 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="397dc104-a576-4d10-a7ac-3b8da678f5ec" containerName="extract-content" Feb 27 11:13:10 crc kubenswrapper[4728]: I0227 11:13:10.379975 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="397dc104-a576-4d10-a7ac-3b8da678f5ec" containerName="registry-server" Feb 27 11:13:10 crc kubenswrapper[4728]: I0227 11:13:10.387099 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n7vgn" Feb 27 11:13:10 crc kubenswrapper[4728]: I0227 11:13:10.399648 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n7vgn"] Feb 27 11:13:10 crc kubenswrapper[4728]: I0227 11:13:10.587198 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ea477fc-2e74-4289-8d6b-fa3ddae48948-utilities\") pod \"redhat-operators-n7vgn\" (UID: \"8ea477fc-2e74-4289-8d6b-fa3ddae48948\") " pod="openshift-marketplace/redhat-operators-n7vgn" Feb 27 11:13:10 crc kubenswrapper[4728]: I0227 11:13:10.587255 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ea477fc-2e74-4289-8d6b-fa3ddae48948-catalog-content\") pod \"redhat-operators-n7vgn\" (UID: \"8ea477fc-2e74-4289-8d6b-fa3ddae48948\") " pod="openshift-marketplace/redhat-operators-n7vgn" Feb 27 11:13:10 crc kubenswrapper[4728]: I0227 11:13:10.587323 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2wf8\" (UniqueName: \"kubernetes.io/projected/8ea477fc-2e74-4289-8d6b-fa3ddae48948-kube-api-access-d2wf8\") pod \"redhat-operators-n7vgn\" (UID: \"8ea477fc-2e74-4289-8d6b-fa3ddae48948\") " pod="openshift-marketplace/redhat-operators-n7vgn" Feb 27 11:13:10 crc kubenswrapper[4728]: I0227 11:13:10.689328 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2wf8\" (UniqueName: \"kubernetes.io/projected/8ea477fc-2e74-4289-8d6b-fa3ddae48948-kube-api-access-d2wf8\") pod \"redhat-operators-n7vgn\" (UID: \"8ea477fc-2e74-4289-8d6b-fa3ddae48948\") " pod="openshift-marketplace/redhat-operators-n7vgn" Feb 27 11:13:10 crc kubenswrapper[4728]: I0227 11:13:10.689530 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ea477fc-2e74-4289-8d6b-fa3ddae48948-utilities\") pod \"redhat-operators-n7vgn\" (UID: \"8ea477fc-2e74-4289-8d6b-fa3ddae48948\") " pod="openshift-marketplace/redhat-operators-n7vgn" Feb 27 11:13:10 crc kubenswrapper[4728]: I0227 11:13:10.689565 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ea477fc-2e74-4289-8d6b-fa3ddae48948-catalog-content\") pod \"redhat-operators-n7vgn\" (UID: \"8ea477fc-2e74-4289-8d6b-fa3ddae48948\") " pod="openshift-marketplace/redhat-operators-n7vgn" Feb 27 11:13:10 crc kubenswrapper[4728]: I0227 11:13:10.690022 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ea477fc-2e74-4289-8d6b-fa3ddae48948-catalog-content\") pod \"redhat-operators-n7vgn\" (UID: \"8ea477fc-2e74-4289-8d6b-fa3ddae48948\") " pod="openshift-marketplace/redhat-operators-n7vgn" Feb 27 11:13:10 crc kubenswrapper[4728]: I0227 11:13:10.690253 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ea477fc-2e74-4289-8d6b-fa3ddae48948-utilities\") pod \"redhat-operators-n7vgn\" (UID: \"8ea477fc-2e74-4289-8d6b-fa3ddae48948\") " pod="openshift-marketplace/redhat-operators-n7vgn" Feb 27 11:13:10 crc kubenswrapper[4728]: I0227 11:13:10.709717 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2wf8\" (UniqueName: \"kubernetes.io/projected/8ea477fc-2e74-4289-8d6b-fa3ddae48948-kube-api-access-d2wf8\") pod \"redhat-operators-n7vgn\" (UID: \"8ea477fc-2e74-4289-8d6b-fa3ddae48948\") " pod="openshift-marketplace/redhat-operators-n7vgn" Feb 27 11:13:10 crc kubenswrapper[4728]: I0227 11:13:10.723721 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n7vgn" Feb 27 11:13:11 crc kubenswrapper[4728]: I0227 11:13:11.226861 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n7vgn"] Feb 27 11:13:11 crc kubenswrapper[4728]: W0227 11:13:11.227713 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ea477fc_2e74_4289_8d6b_fa3ddae48948.slice/crio-909a992cd7ab52e00b33f634f2c6af59287673090f2ec0e46e28470509094d2e WatchSource:0}: Error finding container 909a992cd7ab52e00b33f634f2c6af59287673090f2ec0e46e28470509094d2e: Status 404 returned error can't find the container with id 909a992cd7ab52e00b33f634f2c6af59287673090f2ec0e46e28470509094d2e Feb 27 11:13:12 crc kubenswrapper[4728]: I0227 11:13:12.050020 4728 generic.go:334] "Generic (PLEG): container finished" podID="8ea477fc-2e74-4289-8d6b-fa3ddae48948" containerID="dd7131f16749b0e62b80b85e6638477f34f232d2faa5373112b66001dfeaf1d7" exitCode=0 Feb 27 11:13:12 crc kubenswrapper[4728]: I0227 11:13:12.050066 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7vgn" event={"ID":"8ea477fc-2e74-4289-8d6b-fa3ddae48948","Type":"ContainerDied","Data":"dd7131f16749b0e62b80b85e6638477f34f232d2faa5373112b66001dfeaf1d7"} Feb 27 11:13:12 crc kubenswrapper[4728]: I0227 11:13:12.050383 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7vgn" event={"ID":"8ea477fc-2e74-4289-8d6b-fa3ddae48948","Type":"ContainerStarted","Data":"909a992cd7ab52e00b33f634f2c6af59287673090f2ec0e46e28470509094d2e"} Feb 27 11:13:12 crc kubenswrapper[4728]: I0227 11:13:12.061994 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 11:13:14 crc kubenswrapper[4728]: I0227 11:13:14.108035 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-n7vgn" event={"ID":"8ea477fc-2e74-4289-8d6b-fa3ddae48948","Type":"ContainerStarted","Data":"fb680b36b4f3a8d520805c8f7307930b95ff5bec0622bbe4c3c695828b4a8535"} Feb 27 11:13:18 crc kubenswrapper[4728]: I0227 11:13:18.132486 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ktqsf"] Feb 27 11:13:18 crc kubenswrapper[4728]: I0227 11:13:18.137690 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ktqsf" Feb 27 11:13:18 crc kubenswrapper[4728]: I0227 11:13:18.149043 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ktqsf"] Feb 27 11:13:18 crc kubenswrapper[4728]: I0227 11:13:18.293142 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7mz6\" (UniqueName: \"kubernetes.io/projected/00b4334c-3f12-4336-bb78-6be880bfc501-kube-api-access-h7mz6\") pod \"community-operators-ktqsf\" (UID: \"00b4334c-3f12-4336-bb78-6be880bfc501\") " pod="openshift-marketplace/community-operators-ktqsf" Feb 27 11:13:18 crc kubenswrapper[4728]: I0227 11:13:18.293643 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00b4334c-3f12-4336-bb78-6be880bfc501-utilities\") pod \"community-operators-ktqsf\" (UID: \"00b4334c-3f12-4336-bb78-6be880bfc501\") " pod="openshift-marketplace/community-operators-ktqsf" Feb 27 11:13:18 crc kubenswrapper[4728]: I0227 11:13:18.293914 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00b4334c-3f12-4336-bb78-6be880bfc501-catalog-content\") pod \"community-operators-ktqsf\" (UID: \"00b4334c-3f12-4336-bb78-6be880bfc501\") " pod="openshift-marketplace/community-operators-ktqsf" Feb 27 
11:13:18 crc kubenswrapper[4728]: I0227 11:13:18.396134 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00b4334c-3f12-4336-bb78-6be880bfc501-catalog-content\") pod \"community-operators-ktqsf\" (UID: \"00b4334c-3f12-4336-bb78-6be880bfc501\") " pod="openshift-marketplace/community-operators-ktqsf" Feb 27 11:13:18 crc kubenswrapper[4728]: I0227 11:13:18.396255 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7mz6\" (UniqueName: \"kubernetes.io/projected/00b4334c-3f12-4336-bb78-6be880bfc501-kube-api-access-h7mz6\") pod \"community-operators-ktqsf\" (UID: \"00b4334c-3f12-4336-bb78-6be880bfc501\") " pod="openshift-marketplace/community-operators-ktqsf" Feb 27 11:13:18 crc kubenswrapper[4728]: I0227 11:13:18.396353 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00b4334c-3f12-4336-bb78-6be880bfc501-utilities\") pod \"community-operators-ktqsf\" (UID: \"00b4334c-3f12-4336-bb78-6be880bfc501\") " pod="openshift-marketplace/community-operators-ktqsf" Feb 27 11:13:18 crc kubenswrapper[4728]: I0227 11:13:18.396823 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00b4334c-3f12-4336-bb78-6be880bfc501-catalog-content\") pod \"community-operators-ktqsf\" (UID: \"00b4334c-3f12-4336-bb78-6be880bfc501\") " pod="openshift-marketplace/community-operators-ktqsf" Feb 27 11:13:18 crc kubenswrapper[4728]: I0227 11:13:18.396869 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00b4334c-3f12-4336-bb78-6be880bfc501-utilities\") pod \"community-operators-ktqsf\" (UID: \"00b4334c-3f12-4336-bb78-6be880bfc501\") " pod="openshift-marketplace/community-operators-ktqsf" Feb 27 11:13:18 crc kubenswrapper[4728]: I0227 
11:13:18.414050 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7mz6\" (UniqueName: \"kubernetes.io/projected/00b4334c-3f12-4336-bb78-6be880bfc501-kube-api-access-h7mz6\") pod \"community-operators-ktqsf\" (UID: \"00b4334c-3f12-4336-bb78-6be880bfc501\") " pod="openshift-marketplace/community-operators-ktqsf" Feb 27 11:13:18 crc kubenswrapper[4728]: I0227 11:13:18.477453 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ktqsf" Feb 27 11:13:19 crc kubenswrapper[4728]: I0227 11:13:19.298460 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ktqsf"] Feb 27 11:13:20 crc kubenswrapper[4728]: I0227 11:13:20.211096 4728 generic.go:334] "Generic (PLEG): container finished" podID="8ea477fc-2e74-4289-8d6b-fa3ddae48948" containerID="fb680b36b4f3a8d520805c8f7307930b95ff5bec0622bbe4c3c695828b4a8535" exitCode=0 Feb 27 11:13:20 crc kubenswrapper[4728]: I0227 11:13:20.211549 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7vgn" event={"ID":"8ea477fc-2e74-4289-8d6b-fa3ddae48948","Type":"ContainerDied","Data":"fb680b36b4f3a8d520805c8f7307930b95ff5bec0622bbe4c3c695828b4a8535"} Feb 27 11:13:20 crc kubenswrapper[4728]: I0227 11:13:20.216206 4728 generic.go:334] "Generic (PLEG): container finished" podID="00b4334c-3f12-4336-bb78-6be880bfc501" containerID="bc6ae231f7bc8f689030c329c5fcaa1359fdb4f00b6d266ae50629de1dba50a1" exitCode=0 Feb 27 11:13:20 crc kubenswrapper[4728]: I0227 11:13:20.216294 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktqsf" event={"ID":"00b4334c-3f12-4336-bb78-6be880bfc501","Type":"ContainerDied","Data":"bc6ae231f7bc8f689030c329c5fcaa1359fdb4f00b6d266ae50629de1dba50a1"} Feb 27 11:13:20 crc kubenswrapper[4728]: I0227 11:13:20.216337 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-ktqsf" event={"ID":"00b4334c-3f12-4336-bb78-6be880bfc501","Type":"ContainerStarted","Data":"4dda6bc4e721623549db46236f4b3bda15eb0c3c5f9fcddc7904212d38859c88"} Feb 27 11:13:22 crc kubenswrapper[4728]: I0227 11:13:22.248014 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7vgn" event={"ID":"8ea477fc-2e74-4289-8d6b-fa3ddae48948","Type":"ContainerStarted","Data":"86e94085ff29bcd3573b61f5e3a74c07386fbf3971f3ed2771655687f0ebe24f"} Feb 27 11:13:22 crc kubenswrapper[4728]: I0227 11:13:22.290883 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n7vgn" podStartSLOduration=3.240791145 podStartE2EDuration="12.290867379s" podCreationTimestamp="2026-02-27 11:13:10 +0000 UTC" firstStartedPulling="2026-02-27 11:13:12.061771518 +0000 UTC m=+2812.024137624" lastFinishedPulling="2026-02-27 11:13:21.111847752 +0000 UTC m=+2821.074213858" observedRunningTime="2026-02-27 11:13:22.279711472 +0000 UTC m=+2822.242077578" watchObservedRunningTime="2026-02-27 11:13:22.290867379 +0000 UTC m=+2822.253233485" Feb 27 11:13:26 crc kubenswrapper[4728]: I0227 11:13:26.291057 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktqsf" event={"ID":"00b4334c-3f12-4336-bb78-6be880bfc501","Type":"ContainerStarted","Data":"fd9384affe1199ad2869d73f16354133cd80766f5ac72a86b42753347e5b9c30"} Feb 27 11:13:27 crc kubenswrapper[4728]: I0227 11:13:27.308731 4728 generic.go:334] "Generic (PLEG): container finished" podID="00b4334c-3f12-4336-bb78-6be880bfc501" containerID="fd9384affe1199ad2869d73f16354133cd80766f5ac72a86b42753347e5b9c30" exitCode=0 Feb 27 11:13:27 crc kubenswrapper[4728]: I0227 11:13:27.308886 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktqsf" 
event={"ID":"00b4334c-3f12-4336-bb78-6be880bfc501","Type":"ContainerDied","Data":"fd9384affe1199ad2869d73f16354133cd80766f5ac72a86b42753347e5b9c30"} Feb 27 11:13:28 crc kubenswrapper[4728]: I0227 11:13:28.322007 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktqsf" event={"ID":"00b4334c-3f12-4336-bb78-6be880bfc501","Type":"ContainerStarted","Data":"51f6894248d7ba4e825c9b714d7c7f8565dc103c805c22139067dc2b70243e04"} Feb 27 11:13:28 crc kubenswrapper[4728]: I0227 11:13:28.338718 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ktqsf" podStartSLOduration=2.550175439 podStartE2EDuration="10.338702924s" podCreationTimestamp="2026-02-27 11:13:18 +0000 UTC" firstStartedPulling="2026-02-27 11:13:20.225924499 +0000 UTC m=+2820.188290615" lastFinishedPulling="2026-02-27 11:13:28.014451994 +0000 UTC m=+2827.976818100" observedRunningTime="2026-02-27 11:13:28.338677673 +0000 UTC m=+2828.301043789" watchObservedRunningTime="2026-02-27 11:13:28.338702924 +0000 UTC m=+2828.301069030" Feb 27 11:13:28 crc kubenswrapper[4728]: I0227 11:13:28.478484 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ktqsf" Feb 27 11:13:28 crc kubenswrapper[4728]: I0227 11:13:28.478565 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ktqsf" Feb 27 11:13:29 crc kubenswrapper[4728]: I0227 11:13:29.524931 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ktqsf" podUID="00b4334c-3f12-4336-bb78-6be880bfc501" containerName="registry-server" probeResult="failure" output=< Feb 27 11:13:29 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:13:29 crc kubenswrapper[4728]: > Feb 27 11:13:30 crc kubenswrapper[4728]: I0227 11:13:30.735492 4728 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n7vgn" Feb 27 11:13:30 crc kubenswrapper[4728]: I0227 11:13:30.757079 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n7vgn" Feb 27 11:13:31 crc kubenswrapper[4728]: I0227 11:13:31.793993 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n7vgn" podUID="8ea477fc-2e74-4289-8d6b-fa3ddae48948" containerName="registry-server" probeResult="failure" output=< Feb 27 11:13:31 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:13:31 crc kubenswrapper[4728]: > Feb 27 11:13:38 crc kubenswrapper[4728]: I0227 11:13:38.570101 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ktqsf" Feb 27 11:13:38 crc kubenswrapper[4728]: I0227 11:13:38.629437 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ktqsf" Feb 27 11:13:38 crc kubenswrapper[4728]: I0227 11:13:38.704986 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ktqsf"] Feb 27 11:13:38 crc kubenswrapper[4728]: I0227 11:13:38.819285 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-llvng"] Feb 27 11:13:38 crc kubenswrapper[4728]: I0227 11:13:38.819772 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-llvng" podUID="9ce7dd06-3478-4aa5-a8b5-d371a01feb41" containerName="registry-server" containerID="cri-o://51b75619f50f77268db3880a0aa6169dd78a1a1ee18064389c38e05e1bceed2a" gracePeriod=2 Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.330357 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-llvng" Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.479007 4728 generic.go:334] "Generic (PLEG): container finished" podID="9ce7dd06-3478-4aa5-a8b5-d371a01feb41" containerID="51b75619f50f77268db3880a0aa6169dd78a1a1ee18064389c38e05e1bceed2a" exitCode=0 Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.479749 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llvng" event={"ID":"9ce7dd06-3478-4aa5-a8b5-d371a01feb41","Type":"ContainerDied","Data":"51b75619f50f77268db3880a0aa6169dd78a1a1ee18064389c38e05e1bceed2a"} Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.479807 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llvng" event={"ID":"9ce7dd06-3478-4aa5-a8b5-d371a01feb41","Type":"ContainerDied","Data":"dc4eec735f6f5c260762d1076fb10978a97cd9974e0e727cccefb996c02f0fd2"} Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.479829 4728 scope.go:117] "RemoveContainer" containerID="51b75619f50f77268db3880a0aa6169dd78a1a1ee18064389c38e05e1bceed2a" Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.479960 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-llvng" Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.506685 4728 scope.go:117] "RemoveContainer" containerID="90bb18ea36f0ce91ce74911bd9ce209c82857b0057360a073aaa79a4524e43d9" Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.511232 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsz47\" (UniqueName: \"kubernetes.io/projected/9ce7dd06-3478-4aa5-a8b5-d371a01feb41-kube-api-access-qsz47\") pod \"9ce7dd06-3478-4aa5-a8b5-d371a01feb41\" (UID: \"9ce7dd06-3478-4aa5-a8b5-d371a01feb41\") " Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.511426 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ce7dd06-3478-4aa5-a8b5-d371a01feb41-catalog-content\") pod \"9ce7dd06-3478-4aa5-a8b5-d371a01feb41\" (UID: \"9ce7dd06-3478-4aa5-a8b5-d371a01feb41\") " Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.511638 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ce7dd06-3478-4aa5-a8b5-d371a01feb41-utilities\") pod \"9ce7dd06-3478-4aa5-a8b5-d371a01feb41\" (UID: \"9ce7dd06-3478-4aa5-a8b5-d371a01feb41\") " Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.512736 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ce7dd06-3478-4aa5-a8b5-d371a01feb41-utilities" (OuterVolumeSpecName: "utilities") pod "9ce7dd06-3478-4aa5-a8b5-d371a01feb41" (UID: "9ce7dd06-3478-4aa5-a8b5-d371a01feb41"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.518254 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce7dd06-3478-4aa5-a8b5-d371a01feb41-kube-api-access-qsz47" (OuterVolumeSpecName: "kube-api-access-qsz47") pod "9ce7dd06-3478-4aa5-a8b5-d371a01feb41" (UID: "9ce7dd06-3478-4aa5-a8b5-d371a01feb41"). InnerVolumeSpecName "kube-api-access-qsz47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.545638 4728 scope.go:117] "RemoveContainer" containerID="d5e9e36435e91a5e1ad34be3be5c5cc40246316f297857f0b669b241a4bddd0b" Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.582375 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ce7dd06-3478-4aa5-a8b5-d371a01feb41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ce7dd06-3478-4aa5-a8b5-d371a01feb41" (UID: "9ce7dd06-3478-4aa5-a8b5-d371a01feb41"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.614935 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ce7dd06-3478-4aa5-a8b5-d371a01feb41-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.615166 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsz47\" (UniqueName: \"kubernetes.io/projected/9ce7dd06-3478-4aa5-a8b5-d371a01feb41-kube-api-access-qsz47\") on node \"crc\" DevicePath \"\"" Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.615264 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ce7dd06-3478-4aa5-a8b5-d371a01feb41-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.640948 4728 scope.go:117] "RemoveContainer" containerID="51b75619f50f77268db3880a0aa6169dd78a1a1ee18064389c38e05e1bceed2a" Feb 27 11:13:39 crc kubenswrapper[4728]: E0227 11:13:39.641586 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51b75619f50f77268db3880a0aa6169dd78a1a1ee18064389c38e05e1bceed2a\": container with ID starting with 51b75619f50f77268db3880a0aa6169dd78a1a1ee18064389c38e05e1bceed2a not found: ID does not exist" containerID="51b75619f50f77268db3880a0aa6169dd78a1a1ee18064389c38e05e1bceed2a" Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.641699 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51b75619f50f77268db3880a0aa6169dd78a1a1ee18064389c38e05e1bceed2a"} err="failed to get container status \"51b75619f50f77268db3880a0aa6169dd78a1a1ee18064389c38e05e1bceed2a\": rpc error: code = NotFound desc = could not find container \"51b75619f50f77268db3880a0aa6169dd78a1a1ee18064389c38e05e1bceed2a\": container with ID 
starting with 51b75619f50f77268db3880a0aa6169dd78a1a1ee18064389c38e05e1bceed2a not found: ID does not exist" Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.641819 4728 scope.go:117] "RemoveContainer" containerID="90bb18ea36f0ce91ce74911bd9ce209c82857b0057360a073aaa79a4524e43d9" Feb 27 11:13:39 crc kubenswrapper[4728]: E0227 11:13:39.642271 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90bb18ea36f0ce91ce74911bd9ce209c82857b0057360a073aaa79a4524e43d9\": container with ID starting with 90bb18ea36f0ce91ce74911bd9ce209c82857b0057360a073aaa79a4524e43d9 not found: ID does not exist" containerID="90bb18ea36f0ce91ce74911bd9ce209c82857b0057360a073aaa79a4524e43d9" Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.642315 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90bb18ea36f0ce91ce74911bd9ce209c82857b0057360a073aaa79a4524e43d9"} err="failed to get container status \"90bb18ea36f0ce91ce74911bd9ce209c82857b0057360a073aaa79a4524e43d9\": rpc error: code = NotFound desc = could not find container \"90bb18ea36f0ce91ce74911bd9ce209c82857b0057360a073aaa79a4524e43d9\": container with ID starting with 90bb18ea36f0ce91ce74911bd9ce209c82857b0057360a073aaa79a4524e43d9 not found: ID does not exist" Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.642343 4728 scope.go:117] "RemoveContainer" containerID="d5e9e36435e91a5e1ad34be3be5c5cc40246316f297857f0b669b241a4bddd0b" Feb 27 11:13:39 crc kubenswrapper[4728]: E0227 11:13:39.642815 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5e9e36435e91a5e1ad34be3be5c5cc40246316f297857f0b669b241a4bddd0b\": container with ID starting with d5e9e36435e91a5e1ad34be3be5c5cc40246316f297857f0b669b241a4bddd0b not found: ID does not exist" containerID="d5e9e36435e91a5e1ad34be3be5c5cc40246316f297857f0b669b241a4bddd0b" Feb 27 
11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.642875 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5e9e36435e91a5e1ad34be3be5c5cc40246316f297857f0b669b241a4bddd0b"} err="failed to get container status \"d5e9e36435e91a5e1ad34be3be5c5cc40246316f297857f0b669b241a4bddd0b\": rpc error: code = NotFound desc = could not find container \"d5e9e36435e91a5e1ad34be3be5c5cc40246316f297857f0b669b241a4bddd0b\": container with ID starting with d5e9e36435e91a5e1ad34be3be5c5cc40246316f297857f0b669b241a4bddd0b not found: ID does not exist" Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.846561 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-llvng"] Feb 27 11:13:39 crc kubenswrapper[4728]: I0227 11:13:39.858900 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-llvng"] Feb 27 11:13:40 crc kubenswrapper[4728]: I0227 11:13:40.741407 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ce7dd06-3478-4aa5-a8b5-d371a01feb41" path="/var/lib/kubelet/pods/9ce7dd06-3478-4aa5-a8b5-d371a01feb41/volumes" Feb 27 11:13:41 crc kubenswrapper[4728]: I0227 11:13:41.781791 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n7vgn" podUID="8ea477fc-2e74-4289-8d6b-fa3ddae48948" containerName="registry-server" probeResult="failure" output=< Feb 27 11:13:41 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:13:41 crc kubenswrapper[4728]: > Feb 27 11:13:51 crc kubenswrapper[4728]: I0227 11:13:51.799804 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n7vgn" podUID="8ea477fc-2e74-4289-8d6b-fa3ddae48948" containerName="registry-server" probeResult="failure" output=< Feb 27 11:13:51 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:13:51 crc 
kubenswrapper[4728]: > Feb 27 11:14:00 crc kubenswrapper[4728]: I0227 11:14:00.151728 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536514-bxqr4"] Feb 27 11:14:00 crc kubenswrapper[4728]: E0227 11:14:00.153216 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce7dd06-3478-4aa5-a8b5-d371a01feb41" containerName="extract-content" Feb 27 11:14:00 crc kubenswrapper[4728]: I0227 11:14:00.153233 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce7dd06-3478-4aa5-a8b5-d371a01feb41" containerName="extract-content" Feb 27 11:14:00 crc kubenswrapper[4728]: E0227 11:14:00.153245 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce7dd06-3478-4aa5-a8b5-d371a01feb41" containerName="extract-utilities" Feb 27 11:14:00 crc kubenswrapper[4728]: I0227 11:14:00.153251 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce7dd06-3478-4aa5-a8b5-d371a01feb41" containerName="extract-utilities" Feb 27 11:14:00 crc kubenswrapper[4728]: E0227 11:14:00.153266 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce7dd06-3478-4aa5-a8b5-d371a01feb41" containerName="registry-server" Feb 27 11:14:00 crc kubenswrapper[4728]: I0227 11:14:00.153289 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce7dd06-3478-4aa5-a8b5-d371a01feb41" containerName="registry-server" Feb 27 11:14:00 crc kubenswrapper[4728]: I0227 11:14:00.153538 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce7dd06-3478-4aa5-a8b5-d371a01feb41" containerName="registry-server" Feb 27 11:14:00 crc kubenswrapper[4728]: I0227 11:14:00.154342 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536514-bxqr4" Feb 27 11:14:00 crc kubenswrapper[4728]: I0227 11:14:00.156362 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:14:00 crc kubenswrapper[4728]: I0227 11:14:00.157713 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:14:00 crc kubenswrapper[4728]: I0227 11:14:00.157967 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:14:00 crc kubenswrapper[4728]: I0227 11:14:00.172431 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536514-bxqr4"] Feb 27 11:14:00 crc kubenswrapper[4728]: I0227 11:14:00.246004 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9ms4\" (UniqueName: \"kubernetes.io/projected/c5129a27-a7d3-4b11-a624-118592d0473e-kube-api-access-j9ms4\") pod \"auto-csr-approver-29536514-bxqr4\" (UID: \"c5129a27-a7d3-4b11-a624-118592d0473e\") " pod="openshift-infra/auto-csr-approver-29536514-bxqr4" Feb 27 11:14:00 crc kubenswrapper[4728]: I0227 11:14:00.349386 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9ms4\" (UniqueName: \"kubernetes.io/projected/c5129a27-a7d3-4b11-a624-118592d0473e-kube-api-access-j9ms4\") pod \"auto-csr-approver-29536514-bxqr4\" (UID: \"c5129a27-a7d3-4b11-a624-118592d0473e\") " pod="openshift-infra/auto-csr-approver-29536514-bxqr4" Feb 27 11:14:00 crc kubenswrapper[4728]: I0227 11:14:00.370570 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9ms4\" (UniqueName: \"kubernetes.io/projected/c5129a27-a7d3-4b11-a624-118592d0473e-kube-api-access-j9ms4\") pod \"auto-csr-approver-29536514-bxqr4\" (UID: \"c5129a27-a7d3-4b11-a624-118592d0473e\") " 
pod="openshift-infra/auto-csr-approver-29536514-bxqr4" Feb 27 11:14:00 crc kubenswrapper[4728]: I0227 11:14:00.478553 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536514-bxqr4" Feb 27 11:14:00 crc kubenswrapper[4728]: I0227 11:14:00.784067 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n7vgn" Feb 27 11:14:00 crc kubenswrapper[4728]: I0227 11:14:00.843963 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n7vgn" Feb 27 11:14:00 crc kubenswrapper[4728]: I0227 11:14:00.953293 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536514-bxqr4"] Feb 27 11:14:01 crc kubenswrapper[4728]: I0227 11:14:01.021266 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n7vgn"] Feb 27 11:14:01 crc kubenswrapper[4728]: I0227 11:14:01.723626 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536514-bxqr4" event={"ID":"c5129a27-a7d3-4b11-a624-118592d0473e","Type":"ContainerStarted","Data":"f3ba7699f8da6a0da089ec1d088653201e1fab2e552e8877d87eadf441a0da2f"} Feb 27 11:14:02 crc kubenswrapper[4728]: I0227 11:14:02.736442 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n7vgn" podUID="8ea477fc-2e74-4289-8d6b-fa3ddae48948" containerName="registry-server" containerID="cri-o://86e94085ff29bcd3573b61f5e3a74c07386fbf3971f3ed2771655687f0ebe24f" gracePeriod=2 Feb 27 11:14:02 crc kubenswrapper[4728]: I0227 11:14:02.740096 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536514-bxqr4" event={"ID":"c5129a27-a7d3-4b11-a624-118592d0473e","Type":"ContainerStarted","Data":"dc14fff0104f62edcd9fc59605c96e1d9919e1ba79e61e880f6d7acf81aed0ec"} Feb 27 
11:14:02 crc kubenswrapper[4728]: I0227 11:14:02.774130 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536514-bxqr4" podStartSLOduration=1.863587348 podStartE2EDuration="2.774111535s" podCreationTimestamp="2026-02-27 11:14:00 +0000 UTC" firstStartedPulling="2026-02-27 11:14:00.959939107 +0000 UTC m=+2860.922305213" lastFinishedPulling="2026-02-27 11:14:01.870463294 +0000 UTC m=+2861.832829400" observedRunningTime="2026-02-27 11:14:02.766043565 +0000 UTC m=+2862.728409681" watchObservedRunningTime="2026-02-27 11:14:02.774111535 +0000 UTC m=+2862.736477641" Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.277442 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n7vgn" Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.424027 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2wf8\" (UniqueName: \"kubernetes.io/projected/8ea477fc-2e74-4289-8d6b-fa3ddae48948-kube-api-access-d2wf8\") pod \"8ea477fc-2e74-4289-8d6b-fa3ddae48948\" (UID: \"8ea477fc-2e74-4289-8d6b-fa3ddae48948\") " Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.424469 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ea477fc-2e74-4289-8d6b-fa3ddae48948-catalog-content\") pod \"8ea477fc-2e74-4289-8d6b-fa3ddae48948\" (UID: \"8ea477fc-2e74-4289-8d6b-fa3ddae48948\") " Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.424746 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ea477fc-2e74-4289-8d6b-fa3ddae48948-utilities\") pod \"8ea477fc-2e74-4289-8d6b-fa3ddae48948\" (UID: \"8ea477fc-2e74-4289-8d6b-fa3ddae48948\") " Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.426522 4728 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ea477fc-2e74-4289-8d6b-fa3ddae48948-utilities" (OuterVolumeSpecName: "utilities") pod "8ea477fc-2e74-4289-8d6b-fa3ddae48948" (UID: "8ea477fc-2e74-4289-8d6b-fa3ddae48948"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.431806 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea477fc-2e74-4289-8d6b-fa3ddae48948-kube-api-access-d2wf8" (OuterVolumeSpecName: "kube-api-access-d2wf8") pod "8ea477fc-2e74-4289-8d6b-fa3ddae48948" (UID: "8ea477fc-2e74-4289-8d6b-fa3ddae48948"). InnerVolumeSpecName "kube-api-access-d2wf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.531461 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ea477fc-2e74-4289-8d6b-fa3ddae48948-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.531515 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2wf8\" (UniqueName: \"kubernetes.io/projected/8ea477fc-2e74-4289-8d6b-fa3ddae48948-kube-api-access-d2wf8\") on node \"crc\" DevicePath \"\"" Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.580750 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ea477fc-2e74-4289-8d6b-fa3ddae48948-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ea477fc-2e74-4289-8d6b-fa3ddae48948" (UID: "8ea477fc-2e74-4289-8d6b-fa3ddae48948"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.633181 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ea477fc-2e74-4289-8d6b-fa3ddae48948-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.752526 4728 generic.go:334] "Generic (PLEG): container finished" podID="8ea477fc-2e74-4289-8d6b-fa3ddae48948" containerID="86e94085ff29bcd3573b61f5e3a74c07386fbf3971f3ed2771655687f0ebe24f" exitCode=0 Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.752613 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n7vgn" Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.752616 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7vgn" event={"ID":"8ea477fc-2e74-4289-8d6b-fa3ddae48948","Type":"ContainerDied","Data":"86e94085ff29bcd3573b61f5e3a74c07386fbf3971f3ed2771655687f0ebe24f"} Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.752776 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7vgn" event={"ID":"8ea477fc-2e74-4289-8d6b-fa3ddae48948","Type":"ContainerDied","Data":"909a992cd7ab52e00b33f634f2c6af59287673090f2ec0e46e28470509094d2e"} Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.752815 4728 scope.go:117] "RemoveContainer" containerID="86e94085ff29bcd3573b61f5e3a74c07386fbf3971f3ed2771655687f0ebe24f" Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.762755 4728 generic.go:334] "Generic (PLEG): container finished" podID="c5129a27-a7d3-4b11-a624-118592d0473e" containerID="dc14fff0104f62edcd9fc59605c96e1d9919e1ba79e61e880f6d7acf81aed0ec" exitCode=0 Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.762846 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29536514-bxqr4" event={"ID":"c5129a27-a7d3-4b11-a624-118592d0473e","Type":"ContainerDied","Data":"dc14fff0104f62edcd9fc59605c96e1d9919e1ba79e61e880f6d7acf81aed0ec"} Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.827373 4728 scope.go:117] "RemoveContainer" containerID="fb680b36b4f3a8d520805c8f7307930b95ff5bec0622bbe4c3c695828b4a8535" Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.928564 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n7vgn"] Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.930255 4728 scope.go:117] "RemoveContainer" containerID="dd7131f16749b0e62b80b85e6638477f34f232d2faa5373112b66001dfeaf1d7" Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.941801 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n7vgn"] Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.969593 4728 scope.go:117] "RemoveContainer" containerID="86e94085ff29bcd3573b61f5e3a74c07386fbf3971f3ed2771655687f0ebe24f" Feb 27 11:14:03 crc kubenswrapper[4728]: E0227 11:14:03.976962 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86e94085ff29bcd3573b61f5e3a74c07386fbf3971f3ed2771655687f0ebe24f\": container with ID starting with 86e94085ff29bcd3573b61f5e3a74c07386fbf3971f3ed2771655687f0ebe24f not found: ID does not exist" containerID="86e94085ff29bcd3573b61f5e3a74c07386fbf3971f3ed2771655687f0ebe24f" Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.977004 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e94085ff29bcd3573b61f5e3a74c07386fbf3971f3ed2771655687f0ebe24f"} err="failed to get container status \"86e94085ff29bcd3573b61f5e3a74c07386fbf3971f3ed2771655687f0ebe24f\": rpc error: code = NotFound desc = could not find container 
\"86e94085ff29bcd3573b61f5e3a74c07386fbf3971f3ed2771655687f0ebe24f\": container with ID starting with 86e94085ff29bcd3573b61f5e3a74c07386fbf3971f3ed2771655687f0ebe24f not found: ID does not exist" Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.977028 4728 scope.go:117] "RemoveContainer" containerID="fb680b36b4f3a8d520805c8f7307930b95ff5bec0622bbe4c3c695828b4a8535" Feb 27 11:14:03 crc kubenswrapper[4728]: E0227 11:14:03.977430 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb680b36b4f3a8d520805c8f7307930b95ff5bec0622bbe4c3c695828b4a8535\": container with ID starting with fb680b36b4f3a8d520805c8f7307930b95ff5bec0622bbe4c3c695828b4a8535 not found: ID does not exist" containerID="fb680b36b4f3a8d520805c8f7307930b95ff5bec0622bbe4c3c695828b4a8535" Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.977484 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb680b36b4f3a8d520805c8f7307930b95ff5bec0622bbe4c3c695828b4a8535"} err="failed to get container status \"fb680b36b4f3a8d520805c8f7307930b95ff5bec0622bbe4c3c695828b4a8535\": rpc error: code = NotFound desc = could not find container \"fb680b36b4f3a8d520805c8f7307930b95ff5bec0622bbe4c3c695828b4a8535\": container with ID starting with fb680b36b4f3a8d520805c8f7307930b95ff5bec0622bbe4c3c695828b4a8535 not found: ID does not exist" Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.977560 4728 scope.go:117] "RemoveContainer" containerID="dd7131f16749b0e62b80b85e6638477f34f232d2faa5373112b66001dfeaf1d7" Feb 27 11:14:03 crc kubenswrapper[4728]: E0227 11:14:03.978764 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd7131f16749b0e62b80b85e6638477f34f232d2faa5373112b66001dfeaf1d7\": container with ID starting with dd7131f16749b0e62b80b85e6638477f34f232d2faa5373112b66001dfeaf1d7 not found: ID does not exist" 
containerID="dd7131f16749b0e62b80b85e6638477f34f232d2faa5373112b66001dfeaf1d7" Feb 27 11:14:03 crc kubenswrapper[4728]: I0227 11:14:03.978822 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd7131f16749b0e62b80b85e6638477f34f232d2faa5373112b66001dfeaf1d7"} err="failed to get container status \"dd7131f16749b0e62b80b85e6638477f34f232d2faa5373112b66001dfeaf1d7\": rpc error: code = NotFound desc = could not find container \"dd7131f16749b0e62b80b85e6638477f34f232d2faa5373112b66001dfeaf1d7\": container with ID starting with dd7131f16749b0e62b80b85e6638477f34f232d2faa5373112b66001dfeaf1d7 not found: ID does not exist" Feb 27 11:14:04 crc kubenswrapper[4728]: I0227 11:14:04.750958 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ea477fc-2e74-4289-8d6b-fa3ddae48948" path="/var/lib/kubelet/pods/8ea477fc-2e74-4289-8d6b-fa3ddae48948/volumes" Feb 27 11:14:05 crc kubenswrapper[4728]: I0227 11:14:05.258830 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536514-bxqr4" Feb 27 11:14:05 crc kubenswrapper[4728]: I0227 11:14:05.710899 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9ms4\" (UniqueName: \"kubernetes.io/projected/c5129a27-a7d3-4b11-a624-118592d0473e-kube-api-access-j9ms4\") pod \"c5129a27-a7d3-4b11-a624-118592d0473e\" (UID: \"c5129a27-a7d3-4b11-a624-118592d0473e\") " Feb 27 11:14:05 crc kubenswrapper[4728]: I0227 11:14:05.723493 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5129a27-a7d3-4b11-a624-118592d0473e-kube-api-access-j9ms4" (OuterVolumeSpecName: "kube-api-access-j9ms4") pod "c5129a27-a7d3-4b11-a624-118592d0473e" (UID: "c5129a27-a7d3-4b11-a624-118592d0473e"). InnerVolumeSpecName "kube-api-access-j9ms4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:14:05 crc kubenswrapper[4728]: I0227 11:14:05.796176 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536514-bxqr4" event={"ID":"c5129a27-a7d3-4b11-a624-118592d0473e","Type":"ContainerDied","Data":"f3ba7699f8da6a0da089ec1d088653201e1fab2e552e8877d87eadf441a0da2f"} Feb 27 11:14:05 crc kubenswrapper[4728]: I0227 11:14:05.796215 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3ba7699f8da6a0da089ec1d088653201e1fab2e552e8877d87eadf441a0da2f" Feb 27 11:14:05 crc kubenswrapper[4728]: I0227 11:14:05.796264 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536514-bxqr4" Feb 27 11:14:05 crc kubenswrapper[4728]: I0227 11:14:05.815225 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9ms4\" (UniqueName: \"kubernetes.io/projected/c5129a27-a7d3-4b11-a624-118592d0473e-kube-api-access-j9ms4\") on node \"crc\" DevicePath \"\"" Feb 27 11:14:05 crc kubenswrapper[4728]: I0227 11:14:05.826434 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536508-bszlh"] Feb 27 11:14:05 crc kubenswrapper[4728]: I0227 11:14:05.837983 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536508-bszlh"] Feb 27 11:14:06 crc kubenswrapper[4728]: I0227 11:14:06.743275 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c5a8b83-71a1-425d-a3c9-ddf070667c4e" path="/var/lib/kubelet/pods/6c5a8b83-71a1-425d-a3c9-ddf070667c4e/volumes" Feb 27 11:14:25 crc kubenswrapper[4728]: I0227 11:14:25.659667 4728 scope.go:117] "RemoveContainer" containerID="69523dc5e436cbb7808f98975e0868811490027a3a212f190267f0819f09fe50" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.168862 4728 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb"] Feb 27 11:15:00 crc kubenswrapper[4728]: E0227 11:15:00.170731 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea477fc-2e74-4289-8d6b-fa3ddae48948" containerName="extract-utilities" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.170749 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea477fc-2e74-4289-8d6b-fa3ddae48948" containerName="extract-utilities" Feb 27 11:15:00 crc kubenswrapper[4728]: E0227 11:15:00.170764 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5129a27-a7d3-4b11-a624-118592d0473e" containerName="oc" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.170770 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5129a27-a7d3-4b11-a624-118592d0473e" containerName="oc" Feb 27 11:15:00 crc kubenswrapper[4728]: E0227 11:15:00.170812 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea477fc-2e74-4289-8d6b-fa3ddae48948" containerName="extract-content" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.170820 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea477fc-2e74-4289-8d6b-fa3ddae48948" containerName="extract-content" Feb 27 11:15:00 crc kubenswrapper[4728]: E0227 11:15:00.170850 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea477fc-2e74-4289-8d6b-fa3ddae48948" containerName="registry-server" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.170859 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea477fc-2e74-4289-8d6b-fa3ddae48948" containerName="registry-server" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.171121 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5129a27-a7d3-4b11-a624-118592d0473e" containerName="oc" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.171146 4728 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8ea477fc-2e74-4289-8d6b-fa3ddae48948" containerName="registry-server" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.172383 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.184610 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb"] Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.216295 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.216596 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.330599 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75a807f2-ce70-431e-8122-9aa5c60d2efb-config-volume\") pod \"collect-profiles-29536515-tdtdb\" (UID: \"75a807f2-ce70-431e-8122-9aa5c60d2efb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.331197 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmjms\" (UniqueName: \"kubernetes.io/projected/75a807f2-ce70-431e-8122-9aa5c60d2efb-kube-api-access-fmjms\") pod \"collect-profiles-29536515-tdtdb\" (UID: \"75a807f2-ce70-431e-8122-9aa5c60d2efb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.332073 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/75a807f2-ce70-431e-8122-9aa5c60d2efb-secret-volume\") pod \"collect-profiles-29536515-tdtdb\" (UID: \"75a807f2-ce70-431e-8122-9aa5c60d2efb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.434753 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmjms\" (UniqueName: \"kubernetes.io/projected/75a807f2-ce70-431e-8122-9aa5c60d2efb-kube-api-access-fmjms\") pod \"collect-profiles-29536515-tdtdb\" (UID: \"75a807f2-ce70-431e-8122-9aa5c60d2efb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.434886 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75a807f2-ce70-431e-8122-9aa5c60d2efb-secret-volume\") pod \"collect-profiles-29536515-tdtdb\" (UID: \"75a807f2-ce70-431e-8122-9aa5c60d2efb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.434968 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75a807f2-ce70-431e-8122-9aa5c60d2efb-config-volume\") pod \"collect-profiles-29536515-tdtdb\" (UID: \"75a807f2-ce70-431e-8122-9aa5c60d2efb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.435905 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75a807f2-ce70-431e-8122-9aa5c60d2efb-config-volume\") pod \"collect-profiles-29536515-tdtdb\" (UID: \"75a807f2-ce70-431e-8122-9aa5c60d2efb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.444164 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75a807f2-ce70-431e-8122-9aa5c60d2efb-secret-volume\") pod \"collect-profiles-29536515-tdtdb\" (UID: \"75a807f2-ce70-431e-8122-9aa5c60d2efb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.454750 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmjms\" (UniqueName: \"kubernetes.io/projected/75a807f2-ce70-431e-8122-9aa5c60d2efb-kube-api-access-fmjms\") pod \"collect-profiles-29536515-tdtdb\" (UID: \"75a807f2-ce70-431e-8122-9aa5c60d2efb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb" Feb 27 11:15:00 crc kubenswrapper[4728]: I0227 11:15:00.542289 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb" Feb 27 11:15:01 crc kubenswrapper[4728]: I0227 11:15:01.084517 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb"] Feb 27 11:15:01 crc kubenswrapper[4728]: I0227 11:15:01.468193 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb" event={"ID":"75a807f2-ce70-431e-8122-9aa5c60d2efb","Type":"ContainerStarted","Data":"2c513bb8834b63f61b290eeddbb7ef88059e4ba37289e40354e132c4a2a74ff6"} Feb 27 11:15:01 crc kubenswrapper[4728]: I0227 11:15:01.468529 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb" event={"ID":"75a807f2-ce70-431e-8122-9aa5c60d2efb","Type":"ContainerStarted","Data":"9791ca0cb3ea862fadd666c63a8d47d8136a7b234dfcf3d471fc54615d7d19c9"} Feb 27 11:15:01 crc kubenswrapper[4728]: I0227 11:15:01.492775 4728 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb" podStartSLOduration=1.492755882 podStartE2EDuration="1.492755882s" podCreationTimestamp="2026-02-27 11:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:15:01.485980458 +0000 UTC m=+2921.448346564" watchObservedRunningTime="2026-02-27 11:15:01.492755882 +0000 UTC m=+2921.455121988" Feb 27 11:15:02 crc kubenswrapper[4728]: I0227 11:15:02.483213 4728 generic.go:334] "Generic (PLEG): container finished" podID="75a807f2-ce70-431e-8122-9aa5c60d2efb" containerID="2c513bb8834b63f61b290eeddbb7ef88059e4ba37289e40354e132c4a2a74ff6" exitCode=0 Feb 27 11:15:02 crc kubenswrapper[4728]: I0227 11:15:02.483316 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb" event={"ID":"75a807f2-ce70-431e-8122-9aa5c60d2efb","Type":"ContainerDied","Data":"2c513bb8834b63f61b290eeddbb7ef88059e4ba37289e40354e132c4a2a74ff6"} Feb 27 11:15:03 crc kubenswrapper[4728]: I0227 11:15:03.933733 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb" Feb 27 11:15:04 crc kubenswrapper[4728]: I0227 11:15:04.020968 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmjms\" (UniqueName: \"kubernetes.io/projected/75a807f2-ce70-431e-8122-9aa5c60d2efb-kube-api-access-fmjms\") pod \"75a807f2-ce70-431e-8122-9aa5c60d2efb\" (UID: \"75a807f2-ce70-431e-8122-9aa5c60d2efb\") " Feb 27 11:15:04 crc kubenswrapper[4728]: I0227 11:15:04.021071 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75a807f2-ce70-431e-8122-9aa5c60d2efb-config-volume\") pod \"75a807f2-ce70-431e-8122-9aa5c60d2efb\" (UID: \"75a807f2-ce70-431e-8122-9aa5c60d2efb\") " Feb 27 11:15:04 crc kubenswrapper[4728]: I0227 11:15:04.021298 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75a807f2-ce70-431e-8122-9aa5c60d2efb-secret-volume\") pod \"75a807f2-ce70-431e-8122-9aa5c60d2efb\" (UID: \"75a807f2-ce70-431e-8122-9aa5c60d2efb\") " Feb 27 11:15:04 crc kubenswrapper[4728]: I0227 11:15:04.021869 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75a807f2-ce70-431e-8122-9aa5c60d2efb-config-volume" (OuterVolumeSpecName: "config-volume") pod "75a807f2-ce70-431e-8122-9aa5c60d2efb" (UID: "75a807f2-ce70-431e-8122-9aa5c60d2efb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:15:04 crc kubenswrapper[4728]: I0227 11:15:04.022179 4728 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75a807f2-ce70-431e-8122-9aa5c60d2efb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 11:15:04 crc kubenswrapper[4728]: I0227 11:15:04.027447 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a807f2-ce70-431e-8122-9aa5c60d2efb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "75a807f2-ce70-431e-8122-9aa5c60d2efb" (UID: "75a807f2-ce70-431e-8122-9aa5c60d2efb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:15:04 crc kubenswrapper[4728]: I0227 11:15:04.027447 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a807f2-ce70-431e-8122-9aa5c60d2efb-kube-api-access-fmjms" (OuterVolumeSpecName: "kube-api-access-fmjms") pod "75a807f2-ce70-431e-8122-9aa5c60d2efb" (UID: "75a807f2-ce70-431e-8122-9aa5c60d2efb"). InnerVolumeSpecName "kube-api-access-fmjms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:15:04 crc kubenswrapper[4728]: I0227 11:15:04.123953 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmjms\" (UniqueName: \"kubernetes.io/projected/75a807f2-ce70-431e-8122-9aa5c60d2efb-kube-api-access-fmjms\") on node \"crc\" DevicePath \"\"" Feb 27 11:15:04 crc kubenswrapper[4728]: I0227 11:15:04.123994 4728 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75a807f2-ce70-431e-8122-9aa5c60d2efb-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 11:15:04 crc kubenswrapper[4728]: I0227 11:15:04.507003 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb" event={"ID":"75a807f2-ce70-431e-8122-9aa5c60d2efb","Type":"ContainerDied","Data":"9791ca0cb3ea862fadd666c63a8d47d8136a7b234dfcf3d471fc54615d7d19c9"} Feb 27 11:15:04 crc kubenswrapper[4728]: I0227 11:15:04.507249 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9791ca0cb3ea862fadd666c63a8d47d8136a7b234dfcf3d471fc54615d7d19c9" Feb 27 11:15:04 crc kubenswrapper[4728]: I0227 11:15:04.507112 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb" Feb 27 11:15:04 crc kubenswrapper[4728]: I0227 11:15:04.572534 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j"] Feb 27 11:15:04 crc kubenswrapper[4728]: I0227 11:15:04.584479 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536470-g4k6j"] Feb 27 11:15:04 crc kubenswrapper[4728]: I0227 11:15:04.739528 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11c85e5f-01f8-436d-96d7-d623c640df36" path="/var/lib/kubelet/pods/11c85e5f-01f8-436d-96d7-d623c640df36/volumes" Feb 27 11:15:13 crc kubenswrapper[4728]: I0227 11:15:13.614589 4728 generic.go:334] "Generic (PLEG): container finished" podID="f509a2d6-f273-4497-8dad-171d2f53d125" containerID="f53b9d417d66a58a6df449df38f90259a83e9939925cdf8797a2b073c6ba2fcb" exitCode=0 Feb 27 11:15:13 crc kubenswrapper[4728]: I0227 11:15:13.614641 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" event={"ID":"f509a2d6-f273-4497-8dad-171d2f53d125","Type":"ContainerDied","Data":"f53b9d417d66a58a6df449df38f90259a83e9939925cdf8797a2b073c6ba2fcb"} Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.164894 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.197801 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzd6t\" (UniqueName: \"kubernetes.io/projected/f509a2d6-f273-4497-8dad-171d2f53d125-kube-api-access-pzd6t\") pod \"f509a2d6-f273-4497-8dad-171d2f53d125\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.197852 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-telemetry-combined-ca-bundle\") pod \"f509a2d6-f273-4497-8dad-171d2f53d125\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.197903 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ceilometer-compute-config-data-0\") pod \"f509a2d6-f273-4497-8dad-171d2f53d125\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.197962 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ssh-key-openstack-edpm-ipam\") pod \"f509a2d6-f273-4497-8dad-171d2f53d125\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.198021 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-inventory\") pod \"f509a2d6-f273-4497-8dad-171d2f53d125\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 
11:15:15.198132 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ceilometer-compute-config-data-2\") pod \"f509a2d6-f273-4497-8dad-171d2f53d125\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.198299 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ceilometer-compute-config-data-1\") pod \"f509a2d6-f273-4497-8dad-171d2f53d125\" (UID: \"f509a2d6-f273-4497-8dad-171d2f53d125\") " Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.232543 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f509a2d6-f273-4497-8dad-171d2f53d125-kube-api-access-pzd6t" (OuterVolumeSpecName: "kube-api-access-pzd6t") pod "f509a2d6-f273-4497-8dad-171d2f53d125" (UID: "f509a2d6-f273-4497-8dad-171d2f53d125"). InnerVolumeSpecName "kube-api-access-pzd6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.234698 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "f509a2d6-f273-4497-8dad-171d2f53d125" (UID: "f509a2d6-f273-4497-8dad-171d2f53d125"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.237436 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "f509a2d6-f273-4497-8dad-171d2f53d125" (UID: "f509a2d6-f273-4497-8dad-171d2f53d125"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.244295 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f509a2d6-f273-4497-8dad-171d2f53d125" (UID: "f509a2d6-f273-4497-8dad-171d2f53d125"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.275513 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "f509a2d6-f273-4497-8dad-171d2f53d125" (UID: "f509a2d6-f273-4497-8dad-171d2f53d125"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.281583 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "f509a2d6-f273-4497-8dad-171d2f53d125" (UID: "f509a2d6-f273-4497-8dad-171d2f53d125"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.283444 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-inventory" (OuterVolumeSpecName: "inventory") pod "f509a2d6-f273-4497-8dad-171d2f53d125" (UID: "f509a2d6-f273-4497-8dad-171d2f53d125"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.301621 4728 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.301658 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzd6t\" (UniqueName: \"kubernetes.io/projected/f509a2d6-f273-4497-8dad-171d2f53d125-kube-api-access-pzd6t\") on node \"crc\" DevicePath \"\"" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.301671 4728 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.301684 4728 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.301694 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 
11:15:15.301705 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.301718 4728 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f509a2d6-f273-4497-8dad-171d2f53d125-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.644856 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" event={"ID":"f509a2d6-f273-4497-8dad-171d2f53d125","Type":"ContainerDied","Data":"3532c67a6ff71e407cffd7e041b216b5bdc3c3167e8cd40f12f7c6f2c56f3edb"} Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.644916 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3532c67a6ff71e407cffd7e041b216b5bdc3c3167e8cd40f12f7c6f2c56f3edb" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.644954 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.789332 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc"] Feb 27 11:15:15 crc kubenswrapper[4728]: E0227 11:15:15.789873 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a807f2-ce70-431e-8122-9aa5c60d2efb" containerName="collect-profiles" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.789894 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a807f2-ce70-431e-8122-9aa5c60d2efb" containerName="collect-profiles" Feb 27 11:15:15 crc kubenswrapper[4728]: E0227 11:15:15.789920 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f509a2d6-f273-4497-8dad-171d2f53d125" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.789928 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f509a2d6-f273-4497-8dad-171d2f53d125" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.790213 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f509a2d6-f273-4497-8dad-171d2f53d125" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.790263 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a807f2-ce70-431e-8122-9aa5c60d2efb" containerName="collect-profiles" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.791269 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.796228 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.796606 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.797872 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.808330 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.808523 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.830085 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc"] Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.924068 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.924142 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.924274 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.924323 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.924362 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.924657 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:15 crc kubenswrapper[4728]: I0227 11:15:15.924869 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v4kh\" (UniqueName: \"kubernetes.io/projected/db929680-5ba2-4112-8eaf-c94cdd3b0f89-kube-api-access-6v4kh\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:16 crc kubenswrapper[4728]: I0227 11:15:16.028239 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:16 crc kubenswrapper[4728]: I0227 11:15:16.028738 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:16 crc kubenswrapper[4728]: I0227 11:15:16.028801 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:16 crc kubenswrapper[4728]: I0227 11:15:16.029134 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:16 crc kubenswrapper[4728]: I0227 11:15:16.029268 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v4kh\" (UniqueName: \"kubernetes.io/projected/db929680-5ba2-4112-8eaf-c94cdd3b0f89-kube-api-access-6v4kh\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:16 crc kubenswrapper[4728]: I0227 11:15:16.029435 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:16 crc kubenswrapper[4728]: I0227 11:15:16.029485 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:16 crc kubenswrapper[4728]: I0227 11:15:16.032913 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:16 crc kubenswrapper[4728]: I0227 11:15:16.034236 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:16 crc kubenswrapper[4728]: I0227 11:15:16.034660 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:16 crc kubenswrapper[4728]: I0227 11:15:16.035827 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ceilometer-ipmi-config-data-1\") 
pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:16 crc kubenswrapper[4728]: I0227 11:15:16.037744 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:16 crc kubenswrapper[4728]: I0227 11:15:16.046940 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:16 crc kubenswrapper[4728]: I0227 11:15:16.057197 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v4kh\" (UniqueName: \"kubernetes.io/projected/db929680-5ba2-4112-8eaf-c94cdd3b0f89-kube-api-access-6v4kh\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:16 crc kubenswrapper[4728]: I0227 11:15:16.120269 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:15:16 crc kubenswrapper[4728]: W0227 11:15:16.697080 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb929680_5ba2_4112_8eaf_c94cdd3b0f89.slice/crio-fce966a088fc2a52dfd094eeff0fd5e789b00e03cb09a8de931fcf7f209c5a5a WatchSource:0}: Error finding container fce966a088fc2a52dfd094eeff0fd5e789b00e03cb09a8de931fcf7f209c5a5a: Status 404 returned error can't find the container with id fce966a088fc2a52dfd094eeff0fd5e789b00e03cb09a8de931fcf7f209c5a5a Feb 27 11:15:16 crc kubenswrapper[4728]: I0227 11:15:16.697300 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc"] Feb 27 11:15:17 crc kubenswrapper[4728]: I0227 11:15:17.667227 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" event={"ID":"db929680-5ba2-4112-8eaf-c94cdd3b0f89","Type":"ContainerStarted","Data":"80060f88b88b832b06cf7f2d42e15210aec03cbfd7c038298442c1a247578ff0"} Feb 27 11:15:17 crc kubenswrapper[4728]: I0227 11:15:17.667800 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" event={"ID":"db929680-5ba2-4112-8eaf-c94cdd3b0f89","Type":"ContainerStarted","Data":"fce966a088fc2a52dfd094eeff0fd5e789b00e03cb09a8de931fcf7f209c5a5a"} Feb 27 11:15:17 crc kubenswrapper[4728]: I0227 11:15:17.686772 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" podStartSLOduration=2.19362779 podStartE2EDuration="2.686750062s" podCreationTimestamp="2026-02-27 11:15:15 +0000 UTC" firstStartedPulling="2026-02-27 11:15:16.70278266 +0000 UTC m=+2936.665148766" lastFinishedPulling="2026-02-27 
11:15:17.195904932 +0000 UTC m=+2937.158271038" observedRunningTime="2026-02-27 11:15:17.685332343 +0000 UTC m=+2937.647698449" watchObservedRunningTime="2026-02-27 11:15:17.686750062 +0000 UTC m=+2937.649116178" Feb 27 11:15:25 crc kubenswrapper[4728]: I0227 11:15:25.773172 4728 scope.go:117] "RemoveContainer" containerID="d93b3d5c2c323f1d67065ea5b5844296d0bde41810b6c050efd46056af158962" Feb 27 11:15:35 crc kubenswrapper[4728]: I0227 11:15:35.934737 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:15:35 crc kubenswrapper[4728]: I0227 11:15:35.935495 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:15:54 crc kubenswrapper[4728]: I0227 11:15:54.558117 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-62wl4"] Feb 27 11:15:54 crc kubenswrapper[4728]: I0227 11:15:54.561719 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62wl4" Feb 27 11:15:54 crc kubenswrapper[4728]: I0227 11:15:54.577408 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-62wl4"] Feb 27 11:15:54 crc kubenswrapper[4728]: I0227 11:15:54.587788 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c9fb2f-59b1-4fd5-9892-2105db3b31c9-utilities\") pod \"redhat-marketplace-62wl4\" (UID: \"02c9fb2f-59b1-4fd5-9892-2105db3b31c9\") " pod="openshift-marketplace/redhat-marketplace-62wl4" Feb 27 11:15:54 crc kubenswrapper[4728]: I0227 11:15:54.587870 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79gp2\" (UniqueName: \"kubernetes.io/projected/02c9fb2f-59b1-4fd5-9892-2105db3b31c9-kube-api-access-79gp2\") pod \"redhat-marketplace-62wl4\" (UID: \"02c9fb2f-59b1-4fd5-9892-2105db3b31c9\") " pod="openshift-marketplace/redhat-marketplace-62wl4" Feb 27 11:15:54 crc kubenswrapper[4728]: I0227 11:15:54.588148 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c9fb2f-59b1-4fd5-9892-2105db3b31c9-catalog-content\") pod \"redhat-marketplace-62wl4\" (UID: \"02c9fb2f-59b1-4fd5-9892-2105db3b31c9\") " pod="openshift-marketplace/redhat-marketplace-62wl4" Feb 27 11:15:54 crc kubenswrapper[4728]: I0227 11:15:54.691105 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c9fb2f-59b1-4fd5-9892-2105db3b31c9-utilities\") pod \"redhat-marketplace-62wl4\" (UID: \"02c9fb2f-59b1-4fd5-9892-2105db3b31c9\") " pod="openshift-marketplace/redhat-marketplace-62wl4" Feb 27 11:15:54 crc kubenswrapper[4728]: I0227 11:15:54.691215 4728 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-79gp2\" (UniqueName: \"kubernetes.io/projected/02c9fb2f-59b1-4fd5-9892-2105db3b31c9-kube-api-access-79gp2\") pod \"redhat-marketplace-62wl4\" (UID: \"02c9fb2f-59b1-4fd5-9892-2105db3b31c9\") " pod="openshift-marketplace/redhat-marketplace-62wl4" Feb 27 11:15:54 crc kubenswrapper[4728]: I0227 11:15:54.691535 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c9fb2f-59b1-4fd5-9892-2105db3b31c9-utilities\") pod \"redhat-marketplace-62wl4\" (UID: \"02c9fb2f-59b1-4fd5-9892-2105db3b31c9\") " pod="openshift-marketplace/redhat-marketplace-62wl4" Feb 27 11:15:54 crc kubenswrapper[4728]: I0227 11:15:54.691757 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c9fb2f-59b1-4fd5-9892-2105db3b31c9-catalog-content\") pod \"redhat-marketplace-62wl4\" (UID: \"02c9fb2f-59b1-4fd5-9892-2105db3b31c9\") " pod="openshift-marketplace/redhat-marketplace-62wl4" Feb 27 11:15:54 crc kubenswrapper[4728]: I0227 11:15:54.692115 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c9fb2f-59b1-4fd5-9892-2105db3b31c9-catalog-content\") pod \"redhat-marketplace-62wl4\" (UID: \"02c9fb2f-59b1-4fd5-9892-2105db3b31c9\") " pod="openshift-marketplace/redhat-marketplace-62wl4" Feb 27 11:15:54 crc kubenswrapper[4728]: I0227 11:15:54.712354 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79gp2\" (UniqueName: \"kubernetes.io/projected/02c9fb2f-59b1-4fd5-9892-2105db3b31c9-kube-api-access-79gp2\") pod \"redhat-marketplace-62wl4\" (UID: \"02c9fb2f-59b1-4fd5-9892-2105db3b31c9\") " pod="openshift-marketplace/redhat-marketplace-62wl4" Feb 27 11:15:54 crc kubenswrapper[4728]: I0227 11:15:54.908432 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62wl4" Feb 27 11:15:55 crc kubenswrapper[4728]: I0227 11:15:55.420234 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-62wl4"] Feb 27 11:15:56 crc kubenswrapper[4728]: I0227 11:15:56.177096 4728 generic.go:334] "Generic (PLEG): container finished" podID="02c9fb2f-59b1-4fd5-9892-2105db3b31c9" containerID="cfe1eb37753c5b8750a34e7acf021877c390782cc00b34d55a10bf1725dc4a71" exitCode=0 Feb 27 11:15:56 crc kubenswrapper[4728]: I0227 11:15:56.177151 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62wl4" event={"ID":"02c9fb2f-59b1-4fd5-9892-2105db3b31c9","Type":"ContainerDied","Data":"cfe1eb37753c5b8750a34e7acf021877c390782cc00b34d55a10bf1725dc4a71"} Feb 27 11:15:56 crc kubenswrapper[4728]: I0227 11:15:56.177407 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62wl4" event={"ID":"02c9fb2f-59b1-4fd5-9892-2105db3b31c9","Type":"ContainerStarted","Data":"2562c37d3f93e1db7ab42f006bc05ce66c99da9139688e980e37c24717e842a9"} Feb 27 11:15:58 crc kubenswrapper[4728]: I0227 11:15:58.204636 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62wl4" event={"ID":"02c9fb2f-59b1-4fd5-9892-2105db3b31c9","Type":"ContainerStarted","Data":"e6689ca3c81871fb4c869313731d7dbfc1fc32b35ed784ef5c80244a5e02d0ee"} Feb 27 11:15:59 crc kubenswrapper[4728]: I0227 11:15:59.215720 4728 generic.go:334] "Generic (PLEG): container finished" podID="02c9fb2f-59b1-4fd5-9892-2105db3b31c9" containerID="e6689ca3c81871fb4c869313731d7dbfc1fc32b35ed784ef5c80244a5e02d0ee" exitCode=0 Feb 27 11:15:59 crc kubenswrapper[4728]: I0227 11:15:59.215917 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62wl4" 
event={"ID":"02c9fb2f-59b1-4fd5-9892-2105db3b31c9","Type":"ContainerDied","Data":"e6689ca3c81871fb4c869313731d7dbfc1fc32b35ed784ef5c80244a5e02d0ee"} Feb 27 11:16:00 crc kubenswrapper[4728]: I0227 11:16:00.148240 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536516-t2t5z"] Feb 27 11:16:00 crc kubenswrapper[4728]: I0227 11:16:00.150068 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536516-t2t5z" Feb 27 11:16:00 crc kubenswrapper[4728]: I0227 11:16:00.161016 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:16:00 crc kubenswrapper[4728]: I0227 11:16:00.161876 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:16:00 crc kubenswrapper[4728]: I0227 11:16:00.162064 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:16:00 crc kubenswrapper[4728]: I0227 11:16:00.172563 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536516-t2t5z"] Feb 27 11:16:00 crc kubenswrapper[4728]: I0227 11:16:00.232851 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-774f2\" (UniqueName: \"kubernetes.io/projected/9cf1b72c-2cb3-4759-bb95-dc030a19d8f9-kube-api-access-774f2\") pod \"auto-csr-approver-29536516-t2t5z\" (UID: \"9cf1b72c-2cb3-4759-bb95-dc030a19d8f9\") " pod="openshift-infra/auto-csr-approver-29536516-t2t5z" Feb 27 11:16:00 crc kubenswrapper[4728]: I0227 11:16:00.234738 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62wl4" event={"ID":"02c9fb2f-59b1-4fd5-9892-2105db3b31c9","Type":"ContainerStarted","Data":"62a9e2fa4c567b8c89afc4436c26902a41793ef6da7d144e95117d7d56aa4877"} Feb 27 11:16:00 crc 
kubenswrapper[4728]: I0227 11:16:00.256518 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-62wl4" podStartSLOduration=2.795284189 podStartE2EDuration="6.256483542s" podCreationTimestamp="2026-02-27 11:15:54 +0000 UTC" firstStartedPulling="2026-02-27 11:15:56.180137515 +0000 UTC m=+2976.142503621" lastFinishedPulling="2026-02-27 11:15:59.641336858 +0000 UTC m=+2979.603702974" observedRunningTime="2026-02-27 11:16:00.25567118 +0000 UTC m=+2980.218037306" watchObservedRunningTime="2026-02-27 11:16:00.256483542 +0000 UTC m=+2980.218849648" Feb 27 11:16:00 crc kubenswrapper[4728]: I0227 11:16:00.335153 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-774f2\" (UniqueName: \"kubernetes.io/projected/9cf1b72c-2cb3-4759-bb95-dc030a19d8f9-kube-api-access-774f2\") pod \"auto-csr-approver-29536516-t2t5z\" (UID: \"9cf1b72c-2cb3-4759-bb95-dc030a19d8f9\") " pod="openshift-infra/auto-csr-approver-29536516-t2t5z" Feb 27 11:16:00 crc kubenswrapper[4728]: I0227 11:16:00.356465 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-774f2\" (UniqueName: \"kubernetes.io/projected/9cf1b72c-2cb3-4759-bb95-dc030a19d8f9-kube-api-access-774f2\") pod \"auto-csr-approver-29536516-t2t5z\" (UID: \"9cf1b72c-2cb3-4759-bb95-dc030a19d8f9\") " pod="openshift-infra/auto-csr-approver-29536516-t2t5z" Feb 27 11:16:00 crc kubenswrapper[4728]: I0227 11:16:00.470743 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536516-t2t5z" Feb 27 11:16:00 crc kubenswrapper[4728]: I0227 11:16:00.953268 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536516-t2t5z"] Feb 27 11:16:01 crc kubenswrapper[4728]: I0227 11:16:01.245974 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536516-t2t5z" event={"ID":"9cf1b72c-2cb3-4759-bb95-dc030a19d8f9","Type":"ContainerStarted","Data":"ede6fe8058235dc8f22b67d5b9e6d2dfe5b279aa4734beab74c4079d16ec8c12"} Feb 27 11:16:03 crc kubenswrapper[4728]: I0227 11:16:03.270784 4728 generic.go:334] "Generic (PLEG): container finished" podID="9cf1b72c-2cb3-4759-bb95-dc030a19d8f9" containerID="47732d1260f979b85489339bd0a18408e71329130fe3462d46b7073ae8166751" exitCode=0 Feb 27 11:16:03 crc kubenswrapper[4728]: I0227 11:16:03.270878 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536516-t2t5z" event={"ID":"9cf1b72c-2cb3-4759-bb95-dc030a19d8f9","Type":"ContainerDied","Data":"47732d1260f979b85489339bd0a18408e71329130fe3462d46b7073ae8166751"} Feb 27 11:16:04 crc kubenswrapper[4728]: I0227 11:16:04.727845 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536516-t2t5z" Feb 27 11:16:04 crc kubenswrapper[4728]: I0227 11:16:04.849732 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-774f2\" (UniqueName: \"kubernetes.io/projected/9cf1b72c-2cb3-4759-bb95-dc030a19d8f9-kube-api-access-774f2\") pod \"9cf1b72c-2cb3-4759-bb95-dc030a19d8f9\" (UID: \"9cf1b72c-2cb3-4759-bb95-dc030a19d8f9\") " Feb 27 11:16:04 crc kubenswrapper[4728]: I0227 11:16:04.857985 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf1b72c-2cb3-4759-bb95-dc030a19d8f9-kube-api-access-774f2" (OuterVolumeSpecName: "kube-api-access-774f2") pod "9cf1b72c-2cb3-4759-bb95-dc030a19d8f9" (UID: "9cf1b72c-2cb3-4759-bb95-dc030a19d8f9"). InnerVolumeSpecName "kube-api-access-774f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:16:04 crc kubenswrapper[4728]: I0227 11:16:04.909142 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-62wl4" Feb 27 11:16:04 crc kubenswrapper[4728]: I0227 11:16:04.909292 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-62wl4" Feb 27 11:16:04 crc kubenswrapper[4728]: I0227 11:16:04.953122 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-774f2\" (UniqueName: \"kubernetes.io/projected/9cf1b72c-2cb3-4759-bb95-dc030a19d8f9-kube-api-access-774f2\") on node \"crc\" DevicePath \"\"" Feb 27 11:16:04 crc kubenswrapper[4728]: I0227 11:16:04.958997 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-62wl4" Feb 27 11:16:05 crc kubenswrapper[4728]: I0227 11:16:05.299773 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536516-t2t5z" 
event={"ID":"9cf1b72c-2cb3-4759-bb95-dc030a19d8f9","Type":"ContainerDied","Data":"ede6fe8058235dc8f22b67d5b9e6d2dfe5b279aa4734beab74c4079d16ec8c12"} Feb 27 11:16:05 crc kubenswrapper[4728]: I0227 11:16:05.299830 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ede6fe8058235dc8f22b67d5b9e6d2dfe5b279aa4734beab74c4079d16ec8c12" Feb 27 11:16:05 crc kubenswrapper[4728]: I0227 11:16:05.299828 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536516-t2t5z" Feb 27 11:16:05 crc kubenswrapper[4728]: I0227 11:16:05.374983 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-62wl4" Feb 27 11:16:05 crc kubenswrapper[4728]: I0227 11:16:05.437043 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-62wl4"] Feb 27 11:16:05 crc kubenswrapper[4728]: I0227 11:16:05.831448 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536510-rhn8w"] Feb 27 11:16:05 crc kubenswrapper[4728]: I0227 11:16:05.857289 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536510-rhn8w"] Feb 27 11:16:05 crc kubenswrapper[4728]: I0227 11:16:05.922863 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:16:05 crc kubenswrapper[4728]: I0227 11:16:05.922945 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 27 11:16:06 crc kubenswrapper[4728]: I0227 11:16:06.739175 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44525c67-9312-4bb2-8c2c-aadef5c13d86" path="/var/lib/kubelet/pods/44525c67-9312-4bb2-8c2c-aadef5c13d86/volumes" Feb 27 11:16:07 crc kubenswrapper[4728]: I0227 11:16:07.323772 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-62wl4" podUID="02c9fb2f-59b1-4fd5-9892-2105db3b31c9" containerName="registry-server" containerID="cri-o://62a9e2fa4c567b8c89afc4436c26902a41793ef6da7d144e95117d7d56aa4877" gracePeriod=2 Feb 27 11:16:07 crc kubenswrapper[4728]: I0227 11:16:07.849032 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62wl4" Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.037867 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c9fb2f-59b1-4fd5-9892-2105db3b31c9-utilities\") pod \"02c9fb2f-59b1-4fd5-9892-2105db3b31c9\" (UID: \"02c9fb2f-59b1-4fd5-9892-2105db3b31c9\") " Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.038053 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79gp2\" (UniqueName: \"kubernetes.io/projected/02c9fb2f-59b1-4fd5-9892-2105db3b31c9-kube-api-access-79gp2\") pod \"02c9fb2f-59b1-4fd5-9892-2105db3b31c9\" (UID: \"02c9fb2f-59b1-4fd5-9892-2105db3b31c9\") " Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.038121 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c9fb2f-59b1-4fd5-9892-2105db3b31c9-catalog-content\") pod \"02c9fb2f-59b1-4fd5-9892-2105db3b31c9\" (UID: \"02c9fb2f-59b1-4fd5-9892-2105db3b31c9\") " Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.038648 4728 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c9fb2f-59b1-4fd5-9892-2105db3b31c9-utilities" (OuterVolumeSpecName: "utilities") pod "02c9fb2f-59b1-4fd5-9892-2105db3b31c9" (UID: "02c9fb2f-59b1-4fd5-9892-2105db3b31c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.038872 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c9fb2f-59b1-4fd5-9892-2105db3b31c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.045582 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c9fb2f-59b1-4fd5-9892-2105db3b31c9-kube-api-access-79gp2" (OuterVolumeSpecName: "kube-api-access-79gp2") pod "02c9fb2f-59b1-4fd5-9892-2105db3b31c9" (UID: "02c9fb2f-59b1-4fd5-9892-2105db3b31c9"). InnerVolumeSpecName "kube-api-access-79gp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.141702 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79gp2\" (UniqueName: \"kubernetes.io/projected/02c9fb2f-59b1-4fd5-9892-2105db3b31c9-kube-api-access-79gp2\") on node \"crc\" DevicePath \"\"" Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.170400 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c9fb2f-59b1-4fd5-9892-2105db3b31c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02c9fb2f-59b1-4fd5-9892-2105db3b31c9" (UID: "02c9fb2f-59b1-4fd5-9892-2105db3b31c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.243672 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c9fb2f-59b1-4fd5-9892-2105db3b31c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.335859 4728 generic.go:334] "Generic (PLEG): container finished" podID="02c9fb2f-59b1-4fd5-9892-2105db3b31c9" containerID="62a9e2fa4c567b8c89afc4436c26902a41793ef6da7d144e95117d7d56aa4877" exitCode=0 Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.335912 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62wl4" event={"ID":"02c9fb2f-59b1-4fd5-9892-2105db3b31c9","Type":"ContainerDied","Data":"62a9e2fa4c567b8c89afc4436c26902a41793ef6da7d144e95117d7d56aa4877"} Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.335964 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62wl4" event={"ID":"02c9fb2f-59b1-4fd5-9892-2105db3b31c9","Type":"ContainerDied","Data":"2562c37d3f93e1db7ab42f006bc05ce66c99da9139688e980e37c24717e842a9"} Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.335989 4728 scope.go:117] "RemoveContainer" containerID="62a9e2fa4c567b8c89afc4436c26902a41793ef6da7d144e95117d7d56aa4877" Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.335989 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62wl4" Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.361441 4728 scope.go:117] "RemoveContainer" containerID="e6689ca3c81871fb4c869313731d7dbfc1fc32b35ed784ef5c80244a5e02d0ee" Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.392824 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-62wl4"] Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.401626 4728 scope.go:117] "RemoveContainer" containerID="cfe1eb37753c5b8750a34e7acf021877c390782cc00b34d55a10bf1725dc4a71" Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.418780 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-62wl4"] Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.462110 4728 scope.go:117] "RemoveContainer" containerID="62a9e2fa4c567b8c89afc4436c26902a41793ef6da7d144e95117d7d56aa4877" Feb 27 11:16:08 crc kubenswrapper[4728]: E0227 11:16:08.462629 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62a9e2fa4c567b8c89afc4436c26902a41793ef6da7d144e95117d7d56aa4877\": container with ID starting with 62a9e2fa4c567b8c89afc4436c26902a41793ef6da7d144e95117d7d56aa4877 not found: ID does not exist" containerID="62a9e2fa4c567b8c89afc4436c26902a41793ef6da7d144e95117d7d56aa4877" Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.462672 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62a9e2fa4c567b8c89afc4436c26902a41793ef6da7d144e95117d7d56aa4877"} err="failed to get container status \"62a9e2fa4c567b8c89afc4436c26902a41793ef6da7d144e95117d7d56aa4877\": rpc error: code = NotFound desc = could not find container \"62a9e2fa4c567b8c89afc4436c26902a41793ef6da7d144e95117d7d56aa4877\": container with ID starting with 62a9e2fa4c567b8c89afc4436c26902a41793ef6da7d144e95117d7d56aa4877 not found: 
ID does not exist" Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.462707 4728 scope.go:117] "RemoveContainer" containerID="e6689ca3c81871fb4c869313731d7dbfc1fc32b35ed784ef5c80244a5e02d0ee" Feb 27 11:16:08 crc kubenswrapper[4728]: E0227 11:16:08.463123 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6689ca3c81871fb4c869313731d7dbfc1fc32b35ed784ef5c80244a5e02d0ee\": container with ID starting with e6689ca3c81871fb4c869313731d7dbfc1fc32b35ed784ef5c80244a5e02d0ee not found: ID does not exist" containerID="e6689ca3c81871fb4c869313731d7dbfc1fc32b35ed784ef5c80244a5e02d0ee" Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.463176 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6689ca3c81871fb4c869313731d7dbfc1fc32b35ed784ef5c80244a5e02d0ee"} err="failed to get container status \"e6689ca3c81871fb4c869313731d7dbfc1fc32b35ed784ef5c80244a5e02d0ee\": rpc error: code = NotFound desc = could not find container \"e6689ca3c81871fb4c869313731d7dbfc1fc32b35ed784ef5c80244a5e02d0ee\": container with ID starting with e6689ca3c81871fb4c869313731d7dbfc1fc32b35ed784ef5c80244a5e02d0ee not found: ID does not exist" Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.463205 4728 scope.go:117] "RemoveContainer" containerID="cfe1eb37753c5b8750a34e7acf021877c390782cc00b34d55a10bf1725dc4a71" Feb 27 11:16:08 crc kubenswrapper[4728]: E0227 11:16:08.463617 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfe1eb37753c5b8750a34e7acf021877c390782cc00b34d55a10bf1725dc4a71\": container with ID starting with cfe1eb37753c5b8750a34e7acf021877c390782cc00b34d55a10bf1725dc4a71 not found: ID does not exist" containerID="cfe1eb37753c5b8750a34e7acf021877c390782cc00b34d55a10bf1725dc4a71" Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.463640 4728 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfe1eb37753c5b8750a34e7acf021877c390782cc00b34d55a10bf1725dc4a71"} err="failed to get container status \"cfe1eb37753c5b8750a34e7acf021877c390782cc00b34d55a10bf1725dc4a71\": rpc error: code = NotFound desc = could not find container \"cfe1eb37753c5b8750a34e7acf021877c390782cc00b34d55a10bf1725dc4a71\": container with ID starting with cfe1eb37753c5b8750a34e7acf021877c390782cc00b34d55a10bf1725dc4a71 not found: ID does not exist" Feb 27 11:16:08 crc kubenswrapper[4728]: E0227 11:16:08.590811 4728 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02c9fb2f_59b1_4fd5_9892_2105db3b31c9.slice/crio-2562c37d3f93e1db7ab42f006bc05ce66c99da9139688e980e37c24717e842a9\": RecentStats: unable to find data in memory cache]" Feb 27 11:16:08 crc kubenswrapper[4728]: I0227 11:16:08.747925 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c9fb2f-59b1-4fd5-9892-2105db3b31c9" path="/var/lib/kubelet/pods/02c9fb2f-59b1-4fd5-9892-2105db3b31c9/volumes" Feb 27 11:16:25 crc kubenswrapper[4728]: I0227 11:16:25.865705 4728 scope.go:117] "RemoveContainer" containerID="1cacebc133e41beac455f1fb8e5cf30936f230384b0014a6ddf34d3f6a5f72bb" Feb 27 11:16:35 crc kubenswrapper[4728]: I0227 11:16:35.922747 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:16:35 crc kubenswrapper[4728]: I0227 11:16:35.923442 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:16:35 crc kubenswrapper[4728]: I0227 11:16:35.923534 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 11:16:35 crc kubenswrapper[4728]: I0227 11:16:35.924674 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c94f1a9a09928f530d57cf1af411e7a1834b20d8115a07791ee034fe585d7b3"} pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 11:16:35 crc kubenswrapper[4728]: I0227 11:16:35.924778 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" containerID="cri-o://2c94f1a9a09928f530d57cf1af411e7a1834b20d8115a07791ee034fe585d7b3" gracePeriod=600 Feb 27 11:16:36 crc kubenswrapper[4728]: I0227 11:16:36.739847 4728 generic.go:334] "Generic (PLEG): container finished" podID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerID="2c94f1a9a09928f530d57cf1af411e7a1834b20d8115a07791ee034fe585d7b3" exitCode=0 Feb 27 11:16:36 crc kubenswrapper[4728]: I0227 11:16:36.752558 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerDied","Data":"2c94f1a9a09928f530d57cf1af411e7a1834b20d8115a07791ee034fe585d7b3"} Feb 27 11:16:36 crc kubenswrapper[4728]: I0227 11:16:36.752661 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" 
event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6"} Feb 27 11:16:36 crc kubenswrapper[4728]: I0227 11:16:36.752707 4728 scope.go:117] "RemoveContainer" containerID="57dd66d05b86c121df94f5a356294cb90c6125f234da02888c13a967c3d3eec5" Feb 27 11:17:16 crc kubenswrapper[4728]: I0227 11:17:16.196719 4728 generic.go:334] "Generic (PLEG): container finished" podID="db929680-5ba2-4112-8eaf-c94cdd3b0f89" containerID="80060f88b88b832b06cf7f2d42e15210aec03cbfd7c038298442c1a247578ff0" exitCode=0 Feb 27 11:17:16 crc kubenswrapper[4728]: I0227 11:17:16.196864 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" event={"ID":"db929680-5ba2-4112-8eaf-c94cdd3b0f89","Type":"ContainerDied","Data":"80060f88b88b832b06cf7f2d42e15210aec03cbfd7c038298442c1a247578ff0"} Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.754259 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.860288 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ceilometer-ipmi-config-data-2\") pod \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.860401 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ceilometer-ipmi-config-data-1\") pod \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.860485 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v4kh\" (UniqueName: \"kubernetes.io/projected/db929680-5ba2-4112-8eaf-c94cdd3b0f89-kube-api-access-6v4kh\") pod \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.860557 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-inventory\") pod \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.860689 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ssh-key-openstack-edpm-ipam\") pod \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 
11:17:17.860811 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ceilometer-ipmi-config-data-0\") pod \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.860904 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-telemetry-power-monitoring-combined-ca-bundle\") pod \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\" (UID: \"db929680-5ba2-4112-8eaf-c94cdd3b0f89\") " Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.878992 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "db929680-5ba2-4112-8eaf-c94cdd3b0f89" (UID: "db929680-5ba2-4112-8eaf-c94cdd3b0f89"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.879133 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db929680-5ba2-4112-8eaf-c94cdd3b0f89-kube-api-access-6v4kh" (OuterVolumeSpecName: "kube-api-access-6v4kh") pod "db929680-5ba2-4112-8eaf-c94cdd3b0f89" (UID: "db929680-5ba2-4112-8eaf-c94cdd3b0f89"). InnerVolumeSpecName "kube-api-access-6v4kh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.895131 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "db929680-5ba2-4112-8eaf-c94cdd3b0f89" (UID: "db929680-5ba2-4112-8eaf-c94cdd3b0f89"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.901387 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-inventory" (OuterVolumeSpecName: "inventory") pod "db929680-5ba2-4112-8eaf-c94cdd3b0f89" (UID: "db929680-5ba2-4112-8eaf-c94cdd3b0f89"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.904396 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "db929680-5ba2-4112-8eaf-c94cdd3b0f89" (UID: "db929680-5ba2-4112-8eaf-c94cdd3b0f89"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.912026 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "db929680-5ba2-4112-8eaf-c94cdd3b0f89" (UID: "db929680-5ba2-4112-8eaf-c94cdd3b0f89"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.930968 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "db929680-5ba2-4112-8eaf-c94cdd3b0f89" (UID: "db929680-5ba2-4112-8eaf-c94cdd3b0f89"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.963848 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.963885 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.963896 4728 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.963906 4728 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.963919 4728 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" 
Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.963929 4728 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/db929680-5ba2-4112-8eaf-c94cdd3b0f89-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 27 11:17:17 crc kubenswrapper[4728]: I0227 11:17:17.963939 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v4kh\" (UniqueName: \"kubernetes.io/projected/db929680-5ba2-4112-8eaf-c94cdd3b0f89-kube-api-access-6v4kh\") on node \"crc\" DevicePath \"\"" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.226120 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" event={"ID":"db929680-5ba2-4112-8eaf-c94cdd3b0f89","Type":"ContainerDied","Data":"fce966a088fc2a52dfd094eeff0fd5e789b00e03cb09a8de931fcf7f209c5a5a"} Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.226172 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fce966a088fc2a52dfd094eeff0fd5e789b00e03cb09a8de931fcf7f209c5a5a" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.226240 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.360937 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v"] Feb 27 11:17:18 crc kubenswrapper[4728]: E0227 11:17:18.361490 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c9fb2f-59b1-4fd5-9892-2105db3b31c9" containerName="extract-utilities" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.361525 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c9fb2f-59b1-4fd5-9892-2105db3b31c9" containerName="extract-utilities" Feb 27 11:17:18 crc kubenswrapper[4728]: E0227 11:17:18.361548 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db929680-5ba2-4112-8eaf-c94cdd3b0f89" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.361557 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="db929680-5ba2-4112-8eaf-c94cdd3b0f89" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 27 11:17:18 crc kubenswrapper[4728]: E0227 11:17:18.361580 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf1b72c-2cb3-4759-bb95-dc030a19d8f9" containerName="oc" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.361586 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf1b72c-2cb3-4759-bb95-dc030a19d8f9" containerName="oc" Feb 27 11:17:18 crc kubenswrapper[4728]: E0227 11:17:18.361623 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c9fb2f-59b1-4fd5-9892-2105db3b31c9" containerName="extract-content" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.361629 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c9fb2f-59b1-4fd5-9892-2105db3b31c9" containerName="extract-content" Feb 27 11:17:18 crc kubenswrapper[4728]: E0227 
11:17:18.361646 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c9fb2f-59b1-4fd5-9892-2105db3b31c9" containerName="registry-server" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.361651 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c9fb2f-59b1-4fd5-9892-2105db3b31c9" containerName="registry-server" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.361875 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="db929680-5ba2-4112-8eaf-c94cdd3b0f89" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.361896 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf1b72c-2cb3-4759-bb95-dc030a19d8f9" containerName="oc" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.361910 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c9fb2f-59b1-4fd5-9892-2105db3b31c9" containerName="registry-server" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.362799 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.366127 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.369705 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.370094 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.370486 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r9nq7" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.373299 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.375989 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v"] Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.479580 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8m47v\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.479649 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-ssh-key-openstack-edpm-ipam\") pod 
\"logging-edpm-deployment-openstack-edpm-ipam-8m47v\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.479742 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8m47v\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.479813 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8m47v\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.479859 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb49l\" (UniqueName: \"kubernetes.io/projected/7b98335a-8a74-44b4-aed8-8a56081f60ab-kube-api-access-rb49l\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8m47v\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.581968 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8m47v\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.582033 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8m47v\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.582131 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8m47v\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.582200 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8m47v\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.582234 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb49l\" (UniqueName: \"kubernetes.io/projected/7b98335a-8a74-44b4-aed8-8a56081f60ab-kube-api-access-rb49l\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8m47v\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.589014 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8m47v\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.589145 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8m47v\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.591725 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8m47v\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.595374 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8m47v\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.603108 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb49l\" (UniqueName: \"kubernetes.io/projected/7b98335a-8a74-44b4-aed8-8a56081f60ab-kube-api-access-rb49l\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8m47v\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:17:18 crc kubenswrapper[4728]: I0227 11:17:18.697307 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:17:19 crc kubenswrapper[4728]: I0227 11:17:19.295384 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v"] Feb 27 11:17:20 crc kubenswrapper[4728]: I0227 11:17:20.246494 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" event={"ID":"7b98335a-8a74-44b4-aed8-8a56081f60ab","Type":"ContainerStarted","Data":"5e6bd4abdc1aa5271bf0d854cc88883a476f4b99d0ccc48c760d100946bb1aaa"} Feb 27 11:17:21 crc kubenswrapper[4728]: I0227 11:17:21.263971 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" event={"ID":"7b98335a-8a74-44b4-aed8-8a56081f60ab","Type":"ContainerStarted","Data":"25a166f5f2bce4017915eeaa9fecc6fb05ac51691ff690e36e0e81e944be10f3"} Feb 27 11:17:21 crc kubenswrapper[4728]: I0227 11:17:21.292760 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" podStartSLOduration=2.604759186 podStartE2EDuration="3.292740828s" podCreationTimestamp="2026-02-27 11:17:18 +0000 UTC" firstStartedPulling="2026-02-27 11:17:19.303858741 +0000 UTC m=+3059.266224847" lastFinishedPulling="2026-02-27 11:17:19.991840393 +0000 UTC m=+3059.954206489" observedRunningTime="2026-02-27 11:17:21.281358638 +0000 UTC m=+3061.243724754" watchObservedRunningTime="2026-02-27 11:17:21.292740828 +0000 UTC m=+3061.255106934" Feb 27 11:17:35 crc kubenswrapper[4728]: I0227 11:17:35.408738 4728 generic.go:334] "Generic (PLEG): container finished" podID="7b98335a-8a74-44b4-aed8-8a56081f60ab" 
containerID="25a166f5f2bce4017915eeaa9fecc6fb05ac51691ff690e36e0e81e944be10f3" exitCode=0 Feb 27 11:17:35 crc kubenswrapper[4728]: I0227 11:17:35.409283 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" event={"ID":"7b98335a-8a74-44b4-aed8-8a56081f60ab","Type":"ContainerDied","Data":"25a166f5f2bce4017915eeaa9fecc6fb05ac51691ff690e36e0e81e944be10f3"} Feb 27 11:17:36 crc kubenswrapper[4728]: I0227 11:17:36.924298 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:17:37 crc kubenswrapper[4728]: I0227 11:17:37.089734 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-logging-compute-config-data-1\") pod \"7b98335a-8a74-44b4-aed8-8a56081f60ab\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " Feb 27 11:17:37 crc kubenswrapper[4728]: I0227 11:17:37.089883 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-ssh-key-openstack-edpm-ipam\") pod \"7b98335a-8a74-44b4-aed8-8a56081f60ab\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " Feb 27 11:17:37 crc kubenswrapper[4728]: I0227 11:17:37.089907 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-inventory\") pod \"7b98335a-8a74-44b4-aed8-8a56081f60ab\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " Feb 27 11:17:37 crc kubenswrapper[4728]: I0227 11:17:37.089954 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-logging-compute-config-data-0\") pod \"7b98335a-8a74-44b4-aed8-8a56081f60ab\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " Feb 27 11:17:37 crc kubenswrapper[4728]: I0227 11:17:37.090096 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb49l\" (UniqueName: \"kubernetes.io/projected/7b98335a-8a74-44b4-aed8-8a56081f60ab-kube-api-access-rb49l\") pod \"7b98335a-8a74-44b4-aed8-8a56081f60ab\" (UID: \"7b98335a-8a74-44b4-aed8-8a56081f60ab\") " Feb 27 11:17:37 crc kubenswrapper[4728]: I0227 11:17:37.095757 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b98335a-8a74-44b4-aed8-8a56081f60ab-kube-api-access-rb49l" (OuterVolumeSpecName: "kube-api-access-rb49l") pod "7b98335a-8a74-44b4-aed8-8a56081f60ab" (UID: "7b98335a-8a74-44b4-aed8-8a56081f60ab"). InnerVolumeSpecName "kube-api-access-rb49l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:17:37 crc kubenswrapper[4728]: I0227 11:17:37.128176 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7b98335a-8a74-44b4-aed8-8a56081f60ab" (UID: "7b98335a-8a74-44b4-aed8-8a56081f60ab"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:17:37 crc kubenswrapper[4728]: I0227 11:17:37.131644 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "7b98335a-8a74-44b4-aed8-8a56081f60ab" (UID: "7b98335a-8a74-44b4-aed8-8a56081f60ab"). InnerVolumeSpecName "logging-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:17:37 crc kubenswrapper[4728]: I0227 11:17:37.133896 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "7b98335a-8a74-44b4-aed8-8a56081f60ab" (UID: "7b98335a-8a74-44b4-aed8-8a56081f60ab"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:17:37 crc kubenswrapper[4728]: I0227 11:17:37.146771 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-inventory" (OuterVolumeSpecName: "inventory") pod "7b98335a-8a74-44b4-aed8-8a56081f60ab" (UID: "7b98335a-8a74-44b4-aed8-8a56081f60ab"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:17:37 crc kubenswrapper[4728]: I0227 11:17:37.192673 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 11:17:37 crc kubenswrapper[4728]: I0227 11:17:37.192721 4728 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 11:17:37 crc kubenswrapper[4728]: I0227 11:17:37.192736 4728 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 27 11:17:37 crc kubenswrapper[4728]: I0227 11:17:37.192751 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb49l\" (UniqueName: 
\"kubernetes.io/projected/7b98335a-8a74-44b4-aed8-8a56081f60ab-kube-api-access-rb49l\") on node \"crc\" DevicePath \"\"" Feb 27 11:17:37 crc kubenswrapper[4728]: I0227 11:17:37.192765 4728 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b98335a-8a74-44b4-aed8-8a56081f60ab-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 27 11:17:37 crc kubenswrapper[4728]: I0227 11:17:37.432110 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" event={"ID":"7b98335a-8a74-44b4-aed8-8a56081f60ab","Type":"ContainerDied","Data":"5e6bd4abdc1aa5271bf0d854cc88883a476f4b99d0ccc48c760d100946bb1aaa"} Feb 27 11:17:37 crc kubenswrapper[4728]: I0227 11:17:37.432150 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e6bd4abdc1aa5271bf0d854cc88883a476f4b99d0ccc48c760d100946bb1aaa" Feb 27 11:17:37 crc kubenswrapper[4728]: I0227 11:17:37.432198 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8m47v" Feb 27 11:18:00 crc kubenswrapper[4728]: I0227 11:18:00.151000 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536518-7rm4c"] Feb 27 11:18:00 crc kubenswrapper[4728]: E0227 11:18:00.152087 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b98335a-8a74-44b4-aed8-8a56081f60ab" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 27 11:18:00 crc kubenswrapper[4728]: I0227 11:18:00.152101 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b98335a-8a74-44b4-aed8-8a56081f60ab" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 27 11:18:00 crc kubenswrapper[4728]: I0227 11:18:00.152339 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b98335a-8a74-44b4-aed8-8a56081f60ab" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 27 11:18:00 crc kubenswrapper[4728]: I0227 11:18:00.153108 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536518-7rm4c" Feb 27 11:18:00 crc kubenswrapper[4728]: I0227 11:18:00.157268 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:18:00 crc kubenswrapper[4728]: I0227 11:18:00.157398 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:18:00 crc kubenswrapper[4728]: I0227 11:18:00.157653 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:18:00 crc kubenswrapper[4728]: I0227 11:18:00.163310 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536518-7rm4c"] Feb 27 11:18:00 crc kubenswrapper[4728]: I0227 11:18:00.278811 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwzxn\" (UniqueName: \"kubernetes.io/projected/164c972e-39ae-42c0-8754-d8776d5bba3e-kube-api-access-bwzxn\") pod \"auto-csr-approver-29536518-7rm4c\" (UID: \"164c972e-39ae-42c0-8754-d8776d5bba3e\") " pod="openshift-infra/auto-csr-approver-29536518-7rm4c" Feb 27 11:18:00 crc kubenswrapper[4728]: I0227 11:18:00.380825 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwzxn\" (UniqueName: \"kubernetes.io/projected/164c972e-39ae-42c0-8754-d8776d5bba3e-kube-api-access-bwzxn\") pod \"auto-csr-approver-29536518-7rm4c\" (UID: \"164c972e-39ae-42c0-8754-d8776d5bba3e\") " pod="openshift-infra/auto-csr-approver-29536518-7rm4c" Feb 27 11:18:00 crc kubenswrapper[4728]: I0227 11:18:00.410210 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwzxn\" (UniqueName: \"kubernetes.io/projected/164c972e-39ae-42c0-8754-d8776d5bba3e-kube-api-access-bwzxn\") pod \"auto-csr-approver-29536518-7rm4c\" (UID: \"164c972e-39ae-42c0-8754-d8776d5bba3e\") " 
pod="openshift-infra/auto-csr-approver-29536518-7rm4c" Feb 27 11:18:00 crc kubenswrapper[4728]: I0227 11:18:00.483676 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536518-7rm4c" Feb 27 11:18:01 crc kubenswrapper[4728]: I0227 11:18:01.601403 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536518-7rm4c"] Feb 27 11:18:01 crc kubenswrapper[4728]: I0227 11:18:01.729812 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536518-7rm4c" event={"ID":"164c972e-39ae-42c0-8754-d8776d5bba3e","Type":"ContainerStarted","Data":"7e7f66cab0524cbfbce78d678f4e80c4f27bb2d5bdbf9705b7732f2e22960df7"} Feb 27 11:18:03 crc kubenswrapper[4728]: I0227 11:18:03.754979 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536518-7rm4c" event={"ID":"164c972e-39ae-42c0-8754-d8776d5bba3e","Type":"ContainerStarted","Data":"6761677d3c7b13ac279a3c90b1f5ae76b625c917cb2ed4b003e4b4bfee231ea5"} Feb 27 11:18:03 crc kubenswrapper[4728]: I0227 11:18:03.772142 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536518-7rm4c" podStartSLOduration=2.092225778 podStartE2EDuration="3.77210638s" podCreationTimestamp="2026-02-27 11:18:00 +0000 UTC" firstStartedPulling="2026-02-27 11:18:01.607526766 +0000 UTC m=+3101.569892862" lastFinishedPulling="2026-02-27 11:18:03.287407358 +0000 UTC m=+3103.249773464" observedRunningTime="2026-02-27 11:18:03.770484386 +0000 UTC m=+3103.732850492" watchObservedRunningTime="2026-02-27 11:18:03.77210638 +0000 UTC m=+3103.734472486" Feb 27 11:18:04 crc kubenswrapper[4728]: I0227 11:18:04.767424 4728 generic.go:334] "Generic (PLEG): container finished" podID="164c972e-39ae-42c0-8754-d8776d5bba3e" containerID="6761677d3c7b13ac279a3c90b1f5ae76b625c917cb2ed4b003e4b4bfee231ea5" exitCode=0 Feb 27 11:18:04 crc 
kubenswrapper[4728]: I0227 11:18:04.767494 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536518-7rm4c" event={"ID":"164c972e-39ae-42c0-8754-d8776d5bba3e","Type":"ContainerDied","Data":"6761677d3c7b13ac279a3c90b1f5ae76b625c917cb2ed4b003e4b4bfee231ea5"} Feb 27 11:18:06 crc kubenswrapper[4728]: I0227 11:18:06.195229 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536518-7rm4c" Feb 27 11:18:06 crc kubenswrapper[4728]: I0227 11:18:06.239614 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwzxn\" (UniqueName: \"kubernetes.io/projected/164c972e-39ae-42c0-8754-d8776d5bba3e-kube-api-access-bwzxn\") pod \"164c972e-39ae-42c0-8754-d8776d5bba3e\" (UID: \"164c972e-39ae-42c0-8754-d8776d5bba3e\") " Feb 27 11:18:06 crc kubenswrapper[4728]: I0227 11:18:06.246891 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/164c972e-39ae-42c0-8754-d8776d5bba3e-kube-api-access-bwzxn" (OuterVolumeSpecName: "kube-api-access-bwzxn") pod "164c972e-39ae-42c0-8754-d8776d5bba3e" (UID: "164c972e-39ae-42c0-8754-d8776d5bba3e"). InnerVolumeSpecName "kube-api-access-bwzxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:18:06 crc kubenswrapper[4728]: I0227 11:18:06.343831 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwzxn\" (UniqueName: \"kubernetes.io/projected/164c972e-39ae-42c0-8754-d8776d5bba3e-kube-api-access-bwzxn\") on node \"crc\" DevicePath \"\"" Feb 27 11:18:06 crc kubenswrapper[4728]: I0227 11:18:06.800493 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536518-7rm4c" event={"ID":"164c972e-39ae-42c0-8754-d8776d5bba3e","Type":"ContainerDied","Data":"7e7f66cab0524cbfbce78d678f4e80c4f27bb2d5bdbf9705b7732f2e22960df7"} Feb 27 11:18:06 crc kubenswrapper[4728]: I0227 11:18:06.800549 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e7f66cab0524cbfbce78d678f4e80c4f27bb2d5bdbf9705b7732f2e22960df7" Feb 27 11:18:06 crc kubenswrapper[4728]: I0227 11:18:06.800611 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536518-7rm4c" Feb 27 11:18:06 crc kubenswrapper[4728]: I0227 11:18:06.857460 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536512-hwz24"] Feb 27 11:18:06 crc kubenswrapper[4728]: I0227 11:18:06.867870 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536512-hwz24"] Feb 27 11:18:08 crc kubenswrapper[4728]: I0227 11:18:08.757319 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d584de9a-c699-4906-ab5d-5d1b397af97d" path="/var/lib/kubelet/pods/d584de9a-c699-4906-ab5d-5d1b397af97d/volumes" Feb 27 11:18:26 crc kubenswrapper[4728]: I0227 11:18:26.003966 4728 scope.go:117] "RemoveContainer" containerID="b9b10798071270ca99b775eff520c8776431ae9c8465dd6e7a8b593c1ca205df" Feb 27 11:19:05 crc kubenswrapper[4728]: I0227 11:19:05.922388 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:19:05 crc kubenswrapper[4728]: I0227 11:19:05.923152 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:19:35 crc kubenswrapper[4728]: I0227 11:19:35.921924 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:19:35 crc kubenswrapper[4728]: I0227 11:19:35.922680 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:20:00 crc kubenswrapper[4728]: I0227 11:20:00.200565 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536520-986fg"] Feb 27 11:20:00 crc kubenswrapper[4728]: E0227 11:20:00.201988 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="164c972e-39ae-42c0-8754-d8776d5bba3e" containerName="oc" Feb 27 11:20:00 crc kubenswrapper[4728]: I0227 11:20:00.202012 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="164c972e-39ae-42c0-8754-d8776d5bba3e" containerName="oc" Feb 27 11:20:00 crc kubenswrapper[4728]: I0227 11:20:00.202452 4728 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="164c972e-39ae-42c0-8754-d8776d5bba3e" containerName="oc" Feb 27 11:20:00 crc kubenswrapper[4728]: I0227 11:20:00.203854 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536520-986fg" Feb 27 11:20:00 crc kubenswrapper[4728]: I0227 11:20:00.206500 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:20:00 crc kubenswrapper[4728]: I0227 11:20:00.206988 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:20:00 crc kubenswrapper[4728]: I0227 11:20:00.207820 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:20:00 crc kubenswrapper[4728]: I0227 11:20:00.211367 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536520-986fg"] Feb 27 11:20:00 crc kubenswrapper[4728]: I0227 11:20:00.344855 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwhg6\" (UniqueName: \"kubernetes.io/projected/2601deca-07c9-4871-bef6-57a312202700-kube-api-access-wwhg6\") pod \"auto-csr-approver-29536520-986fg\" (UID: \"2601deca-07c9-4871-bef6-57a312202700\") " pod="openshift-infra/auto-csr-approver-29536520-986fg" Feb 27 11:20:00 crc kubenswrapper[4728]: I0227 11:20:00.448607 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwhg6\" (UniqueName: \"kubernetes.io/projected/2601deca-07c9-4871-bef6-57a312202700-kube-api-access-wwhg6\") pod \"auto-csr-approver-29536520-986fg\" (UID: \"2601deca-07c9-4871-bef6-57a312202700\") " pod="openshift-infra/auto-csr-approver-29536520-986fg" Feb 27 11:20:00 crc kubenswrapper[4728]: I0227 11:20:00.471447 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wwhg6\" (UniqueName: \"kubernetes.io/projected/2601deca-07c9-4871-bef6-57a312202700-kube-api-access-wwhg6\") pod \"auto-csr-approver-29536520-986fg\" (UID: \"2601deca-07c9-4871-bef6-57a312202700\") " pod="openshift-infra/auto-csr-approver-29536520-986fg" Feb 27 11:20:00 crc kubenswrapper[4728]: I0227 11:20:00.530806 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536520-986fg" Feb 27 11:20:01 crc kubenswrapper[4728]: I0227 11:20:01.057862 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536520-986fg"] Feb 27 11:20:01 crc kubenswrapper[4728]: I0227 11:20:01.059959 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 11:20:01 crc kubenswrapper[4728]: I0227 11:20:01.283004 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536520-986fg" event={"ID":"2601deca-07c9-4871-bef6-57a312202700","Type":"ContainerStarted","Data":"d5ba13d9464de08afb830cc63dd2040f86fec50f09f2e656d8d47e98283f0b61"} Feb 27 11:20:05 crc kubenswrapper[4728]: I0227 11:20:05.329403 4728 generic.go:334] "Generic (PLEG): container finished" podID="2601deca-07c9-4871-bef6-57a312202700" containerID="baa3431c9e42e09122cec31771f2b9f2848397d77155ec37bdd4a6387e0844ae" exitCode=0 Feb 27 11:20:05 crc kubenswrapper[4728]: I0227 11:20:05.329482 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536520-986fg" event={"ID":"2601deca-07c9-4871-bef6-57a312202700","Type":"ContainerDied","Data":"baa3431c9e42e09122cec31771f2b9f2848397d77155ec37bdd4a6387e0844ae"} Feb 27 11:20:05 crc kubenswrapper[4728]: I0227 11:20:05.922440 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:20:05 crc kubenswrapper[4728]: I0227 11:20:05.922544 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:20:05 crc kubenswrapper[4728]: I0227 11:20:05.922604 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 11:20:05 crc kubenswrapper[4728]: I0227 11:20:05.923610 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6"} pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 11:20:05 crc kubenswrapper[4728]: I0227 11:20:05.923679 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" containerID="cri-o://4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" gracePeriod=600 Feb 27 11:20:06 crc kubenswrapper[4728]: E0227 11:20:06.063177 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:20:06 
crc kubenswrapper[4728]: I0227 11:20:06.349725 4728 generic.go:334] "Generic (PLEG): container finished" podID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" exitCode=0 Feb 27 11:20:06 crc kubenswrapper[4728]: I0227 11:20:06.350025 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerDied","Data":"4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6"} Feb 27 11:20:06 crc kubenswrapper[4728]: I0227 11:20:06.350068 4728 scope.go:117] "RemoveContainer" containerID="2c94f1a9a09928f530d57cf1af411e7a1834b20d8115a07791ee034fe585d7b3" Feb 27 11:20:06 crc kubenswrapper[4728]: I0227 11:20:06.351569 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:20:06 crc kubenswrapper[4728]: E0227 11:20:06.352368 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:20:06 crc kubenswrapper[4728]: I0227 11:20:06.838600 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536520-986fg" Feb 27 11:20:06 crc kubenswrapper[4728]: I0227 11:20:06.986594 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwhg6\" (UniqueName: \"kubernetes.io/projected/2601deca-07c9-4871-bef6-57a312202700-kube-api-access-wwhg6\") pod \"2601deca-07c9-4871-bef6-57a312202700\" (UID: \"2601deca-07c9-4871-bef6-57a312202700\") " Feb 27 11:20:06 crc kubenswrapper[4728]: I0227 11:20:06.995345 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2601deca-07c9-4871-bef6-57a312202700-kube-api-access-wwhg6" (OuterVolumeSpecName: "kube-api-access-wwhg6") pod "2601deca-07c9-4871-bef6-57a312202700" (UID: "2601deca-07c9-4871-bef6-57a312202700"). InnerVolumeSpecName "kube-api-access-wwhg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:20:07 crc kubenswrapper[4728]: I0227 11:20:07.089828 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwhg6\" (UniqueName: \"kubernetes.io/projected/2601deca-07c9-4871-bef6-57a312202700-kube-api-access-wwhg6\") on node \"crc\" DevicePath \"\"" Feb 27 11:20:07 crc kubenswrapper[4728]: I0227 11:20:07.362078 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536520-986fg" event={"ID":"2601deca-07c9-4871-bef6-57a312202700","Type":"ContainerDied","Data":"d5ba13d9464de08afb830cc63dd2040f86fec50f09f2e656d8d47e98283f0b61"} Feb 27 11:20:07 crc kubenswrapper[4728]: I0227 11:20:07.363400 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5ba13d9464de08afb830cc63dd2040f86fec50f09f2e656d8d47e98283f0b61" Feb 27 11:20:07 crc kubenswrapper[4728]: I0227 11:20:07.362128 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536520-986fg" Feb 27 11:20:07 crc kubenswrapper[4728]: I0227 11:20:07.955720 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536514-bxqr4"] Feb 27 11:20:07 crc kubenswrapper[4728]: I0227 11:20:07.969547 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536514-bxqr4"] Feb 27 11:20:08 crc kubenswrapper[4728]: I0227 11:20:08.743141 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5129a27-a7d3-4b11-a624-118592d0473e" path="/var/lib/kubelet/pods/c5129a27-a7d3-4b11-a624-118592d0473e/volumes" Feb 27 11:20:17 crc kubenswrapper[4728]: I0227 11:20:17.725889 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:20:17 crc kubenswrapper[4728]: E0227 11:20:17.726848 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:20:26 crc kubenswrapper[4728]: I0227 11:20:26.110966 4728 scope.go:117] "RemoveContainer" containerID="dc14fff0104f62edcd9fc59605c96e1d9919e1ba79e61e880f6d7acf81aed0ec" Feb 27 11:20:30 crc kubenswrapper[4728]: I0227 11:20:30.740212 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:20:30 crc kubenswrapper[4728]: E0227 11:20:30.741543 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:20:41 crc kubenswrapper[4728]: I0227 11:20:41.725233 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:20:41 crc kubenswrapper[4728]: E0227 11:20:41.726253 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:20:56 crc kubenswrapper[4728]: I0227 11:20:56.726877 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:20:56 crc kubenswrapper[4728]: E0227 11:20:56.727769 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:21:09 crc kubenswrapper[4728]: I0227 11:21:09.725303 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:21:09 crc kubenswrapper[4728]: E0227 11:21:09.726386 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:21:24 crc kubenswrapper[4728]: I0227 11:21:24.725593 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:21:24 crc kubenswrapper[4728]: E0227 11:21:24.726986 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:21:37 crc kubenswrapper[4728]: I0227 11:21:37.724790 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:21:37 crc kubenswrapper[4728]: E0227 11:21:37.725470 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:21:50 crc kubenswrapper[4728]: I0227 11:21:50.731353 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:21:50 crc kubenswrapper[4728]: E0227 11:21:50.732063 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:22:00 crc kubenswrapper[4728]: I0227 11:22:00.160749 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536522-qhglf"] Feb 27 11:22:00 crc kubenswrapper[4728]: E0227 11:22:00.161663 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2601deca-07c9-4871-bef6-57a312202700" containerName="oc" Feb 27 11:22:00 crc kubenswrapper[4728]: I0227 11:22:00.161675 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="2601deca-07c9-4871-bef6-57a312202700" containerName="oc" Feb 27 11:22:00 crc kubenswrapper[4728]: I0227 11:22:00.161872 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="2601deca-07c9-4871-bef6-57a312202700" containerName="oc" Feb 27 11:22:00 crc kubenswrapper[4728]: I0227 11:22:00.162662 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536522-qhglf" Feb 27 11:22:00 crc kubenswrapper[4728]: I0227 11:22:00.164825 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:22:00 crc kubenswrapper[4728]: I0227 11:22:00.165374 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:22:00 crc kubenswrapper[4728]: I0227 11:22:00.167058 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:22:00 crc kubenswrapper[4728]: I0227 11:22:00.179829 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536522-qhglf"] Feb 27 11:22:00 crc kubenswrapper[4728]: I0227 11:22:00.271481 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7spf9\" (UniqueName: \"kubernetes.io/projected/94404864-351e-4827-b8f1-a59bf9a35f03-kube-api-access-7spf9\") pod \"auto-csr-approver-29536522-qhglf\" (UID: \"94404864-351e-4827-b8f1-a59bf9a35f03\") " pod="openshift-infra/auto-csr-approver-29536522-qhglf" Feb 27 11:22:00 crc kubenswrapper[4728]: I0227 11:22:00.373657 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7spf9\" (UniqueName: \"kubernetes.io/projected/94404864-351e-4827-b8f1-a59bf9a35f03-kube-api-access-7spf9\") pod \"auto-csr-approver-29536522-qhglf\" (UID: \"94404864-351e-4827-b8f1-a59bf9a35f03\") " pod="openshift-infra/auto-csr-approver-29536522-qhglf" Feb 27 11:22:00 crc kubenswrapper[4728]: I0227 11:22:00.395240 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7spf9\" (UniqueName: \"kubernetes.io/projected/94404864-351e-4827-b8f1-a59bf9a35f03-kube-api-access-7spf9\") pod \"auto-csr-approver-29536522-qhglf\" (UID: \"94404864-351e-4827-b8f1-a59bf9a35f03\") " 
pod="openshift-infra/auto-csr-approver-29536522-qhglf" Feb 27 11:22:00 crc kubenswrapper[4728]: I0227 11:22:00.483456 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536522-qhglf" Feb 27 11:22:01 crc kubenswrapper[4728]: I0227 11:22:01.013596 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536522-qhglf"] Feb 27 11:22:01 crc kubenswrapper[4728]: I0227 11:22:01.777284 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536522-qhglf" event={"ID":"94404864-351e-4827-b8f1-a59bf9a35f03","Type":"ContainerStarted","Data":"ae06f6bd000ce1360b160a2019e454fa83c5b275e9ba4f2b87c7272b757eae09"} Feb 27 11:22:02 crc kubenswrapper[4728]: I0227 11:22:02.725563 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:22:02 crc kubenswrapper[4728]: E0227 11:22:02.726960 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:22:02 crc kubenswrapper[4728]: I0227 11:22:02.791822 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536522-qhglf" event={"ID":"94404864-351e-4827-b8f1-a59bf9a35f03","Type":"ContainerStarted","Data":"f11f2b3c85c0c86e06c9baa9744e78fb733c7b7af32ceaf877eec8d3a0df98e7"} Feb 27 11:22:02 crc kubenswrapper[4728]: I0227 11:22:02.817261 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536522-qhglf" podStartSLOduration=1.659091232 
podStartE2EDuration="2.817224953s" podCreationTimestamp="2026-02-27 11:22:00 +0000 UTC" firstStartedPulling="2026-02-27 11:22:01.027566637 +0000 UTC m=+3340.989932743" lastFinishedPulling="2026-02-27 11:22:02.185700358 +0000 UTC m=+3342.148066464" observedRunningTime="2026-02-27 11:22:02.806521332 +0000 UTC m=+3342.768887438" watchObservedRunningTime="2026-02-27 11:22:02.817224953 +0000 UTC m=+3342.779591059" Feb 27 11:22:03 crc kubenswrapper[4728]: I0227 11:22:03.810380 4728 generic.go:334] "Generic (PLEG): container finished" podID="94404864-351e-4827-b8f1-a59bf9a35f03" containerID="f11f2b3c85c0c86e06c9baa9744e78fb733c7b7af32ceaf877eec8d3a0df98e7" exitCode=0 Feb 27 11:22:03 crc kubenswrapper[4728]: I0227 11:22:03.810429 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536522-qhglf" event={"ID":"94404864-351e-4827-b8f1-a59bf9a35f03","Type":"ContainerDied","Data":"f11f2b3c85c0c86e06c9baa9744e78fb733c7b7af32ceaf877eec8d3a0df98e7"} Feb 27 11:22:05 crc kubenswrapper[4728]: I0227 11:22:05.359375 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536522-qhglf" Feb 27 11:22:05 crc kubenswrapper[4728]: I0227 11:22:05.503955 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7spf9\" (UniqueName: \"kubernetes.io/projected/94404864-351e-4827-b8f1-a59bf9a35f03-kube-api-access-7spf9\") pod \"94404864-351e-4827-b8f1-a59bf9a35f03\" (UID: \"94404864-351e-4827-b8f1-a59bf9a35f03\") " Feb 27 11:22:05 crc kubenswrapper[4728]: I0227 11:22:05.512789 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94404864-351e-4827-b8f1-a59bf9a35f03-kube-api-access-7spf9" (OuterVolumeSpecName: "kube-api-access-7spf9") pod "94404864-351e-4827-b8f1-a59bf9a35f03" (UID: "94404864-351e-4827-b8f1-a59bf9a35f03"). InnerVolumeSpecName "kube-api-access-7spf9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:22:05 crc kubenswrapper[4728]: I0227 11:22:05.607883 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7spf9\" (UniqueName: \"kubernetes.io/projected/94404864-351e-4827-b8f1-a59bf9a35f03-kube-api-access-7spf9\") on node \"crc\" DevicePath \"\"" Feb 27 11:22:05 crc kubenswrapper[4728]: I0227 11:22:05.854664 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536522-qhglf" event={"ID":"94404864-351e-4827-b8f1-a59bf9a35f03","Type":"ContainerDied","Data":"ae06f6bd000ce1360b160a2019e454fa83c5b275e9ba4f2b87c7272b757eae09"} Feb 27 11:22:05 crc kubenswrapper[4728]: I0227 11:22:05.854700 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae06f6bd000ce1360b160a2019e454fa83c5b275e9ba4f2b87c7272b757eae09" Feb 27 11:22:05 crc kubenswrapper[4728]: I0227 11:22:05.854707 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536522-qhglf" Feb 27 11:22:05 crc kubenswrapper[4728]: I0227 11:22:05.898703 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536516-t2t5z"] Feb 27 11:22:05 crc kubenswrapper[4728]: I0227 11:22:05.909428 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536516-t2t5z"] Feb 27 11:22:06 crc kubenswrapper[4728]: I0227 11:22:06.742406 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cf1b72c-2cb3-4759-bb95-dc030a19d8f9" path="/var/lib/kubelet/pods/9cf1b72c-2cb3-4759-bb95-dc030a19d8f9/volumes" Feb 27 11:22:15 crc kubenswrapper[4728]: I0227 11:22:15.725394 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:22:15 crc kubenswrapper[4728]: E0227 11:22:15.726296 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:22:26 crc kubenswrapper[4728]: I0227 11:22:26.264254 4728 scope.go:117] "RemoveContainer" containerID="47732d1260f979b85489339bd0a18408e71329130fe3462d46b7073ae8166751" Feb 27 11:22:26 crc kubenswrapper[4728]: I0227 11:22:26.731849 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:22:26 crc kubenswrapper[4728]: E0227 11:22:26.732682 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:22:37 crc kubenswrapper[4728]: I0227 11:22:37.725804 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:22:37 crc kubenswrapper[4728]: E0227 11:22:37.726487 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:22:48 crc kubenswrapper[4728]: I0227 11:22:48.724955 4728 scope.go:117] "RemoveContainer" 
containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:22:48 crc kubenswrapper[4728]: E0227 11:22:48.726000 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:22:54 crc kubenswrapper[4728]: I0227 11:22:54.062379 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fc4zg"] Feb 27 11:22:54 crc kubenswrapper[4728]: E0227 11:22:54.063635 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94404864-351e-4827-b8f1-a59bf9a35f03" containerName="oc" Feb 27 11:22:54 crc kubenswrapper[4728]: I0227 11:22:54.063654 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="94404864-351e-4827-b8f1-a59bf9a35f03" containerName="oc" Feb 27 11:22:54 crc kubenswrapper[4728]: I0227 11:22:54.063897 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="94404864-351e-4827-b8f1-a59bf9a35f03" containerName="oc" Feb 27 11:22:54 crc kubenswrapper[4728]: I0227 11:22:54.067583 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fc4zg" Feb 27 11:22:54 crc kubenswrapper[4728]: I0227 11:22:54.075015 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fc4zg"] Feb 27 11:22:54 crc kubenswrapper[4728]: I0227 11:22:54.099965 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm6z7\" (UniqueName: \"kubernetes.io/projected/3d6d67d0-dafd-4ecc-af66-535398f1e80b-kube-api-access-nm6z7\") pod \"certified-operators-fc4zg\" (UID: \"3d6d67d0-dafd-4ecc-af66-535398f1e80b\") " pod="openshift-marketplace/certified-operators-fc4zg" Feb 27 11:22:54 crc kubenswrapper[4728]: I0227 11:22:54.100212 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6d67d0-dafd-4ecc-af66-535398f1e80b-catalog-content\") pod \"certified-operators-fc4zg\" (UID: \"3d6d67d0-dafd-4ecc-af66-535398f1e80b\") " pod="openshift-marketplace/certified-operators-fc4zg" Feb 27 11:22:54 crc kubenswrapper[4728]: I0227 11:22:54.100494 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6d67d0-dafd-4ecc-af66-535398f1e80b-utilities\") pod \"certified-operators-fc4zg\" (UID: \"3d6d67d0-dafd-4ecc-af66-535398f1e80b\") " pod="openshift-marketplace/certified-operators-fc4zg" Feb 27 11:22:54 crc kubenswrapper[4728]: I0227 11:22:54.202049 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6d67d0-dafd-4ecc-af66-535398f1e80b-catalog-content\") pod \"certified-operators-fc4zg\" (UID: \"3d6d67d0-dafd-4ecc-af66-535398f1e80b\") " pod="openshift-marketplace/certified-operators-fc4zg" Feb 27 11:22:54 crc kubenswrapper[4728]: I0227 11:22:54.202410 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6d67d0-dafd-4ecc-af66-535398f1e80b-utilities\") pod \"certified-operators-fc4zg\" (UID: \"3d6d67d0-dafd-4ecc-af66-535398f1e80b\") " pod="openshift-marketplace/certified-operators-fc4zg" Feb 27 11:22:54 crc kubenswrapper[4728]: I0227 11:22:54.202615 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm6z7\" (UniqueName: \"kubernetes.io/projected/3d6d67d0-dafd-4ecc-af66-535398f1e80b-kube-api-access-nm6z7\") pod \"certified-operators-fc4zg\" (UID: \"3d6d67d0-dafd-4ecc-af66-535398f1e80b\") " pod="openshift-marketplace/certified-operators-fc4zg" Feb 27 11:22:54 crc kubenswrapper[4728]: I0227 11:22:54.203011 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6d67d0-dafd-4ecc-af66-535398f1e80b-catalog-content\") pod \"certified-operators-fc4zg\" (UID: \"3d6d67d0-dafd-4ecc-af66-535398f1e80b\") " pod="openshift-marketplace/certified-operators-fc4zg" Feb 27 11:22:54 crc kubenswrapper[4728]: I0227 11:22:54.203114 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6d67d0-dafd-4ecc-af66-535398f1e80b-utilities\") pod \"certified-operators-fc4zg\" (UID: \"3d6d67d0-dafd-4ecc-af66-535398f1e80b\") " pod="openshift-marketplace/certified-operators-fc4zg" Feb 27 11:22:54 crc kubenswrapper[4728]: I0227 11:22:54.242278 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm6z7\" (UniqueName: \"kubernetes.io/projected/3d6d67d0-dafd-4ecc-af66-535398f1e80b-kube-api-access-nm6z7\") pod \"certified-operators-fc4zg\" (UID: \"3d6d67d0-dafd-4ecc-af66-535398f1e80b\") " pod="openshift-marketplace/certified-operators-fc4zg" Feb 27 11:22:54 crc kubenswrapper[4728]: I0227 11:22:54.390243 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fc4zg" Feb 27 11:22:54 crc kubenswrapper[4728]: I0227 11:22:54.909969 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fc4zg"] Feb 27 11:22:55 crc kubenswrapper[4728]: I0227 11:22:55.440359 4728 generic.go:334] "Generic (PLEG): container finished" podID="3d6d67d0-dafd-4ecc-af66-535398f1e80b" containerID="4ba7d3117ff3bcb8fbb4c3aacb3ee82a74f89800297b7886c1bf9e1698b76f25" exitCode=0 Feb 27 11:22:55 crc kubenswrapper[4728]: I0227 11:22:55.440446 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fc4zg" event={"ID":"3d6d67d0-dafd-4ecc-af66-535398f1e80b","Type":"ContainerDied","Data":"4ba7d3117ff3bcb8fbb4c3aacb3ee82a74f89800297b7886c1bf9e1698b76f25"} Feb 27 11:22:55 crc kubenswrapper[4728]: I0227 11:22:55.440707 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fc4zg" event={"ID":"3d6d67d0-dafd-4ecc-af66-535398f1e80b","Type":"ContainerStarted","Data":"b390214e4237bd8c350064c25d46b6e181d5c2a42cfb8589bd087129d47c69b3"} Feb 27 11:22:57 crc kubenswrapper[4728]: I0227 11:22:57.464638 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fc4zg" event={"ID":"3d6d67d0-dafd-4ecc-af66-535398f1e80b","Type":"ContainerStarted","Data":"4ab3188b515e4072e42b3644d0deff42382b8561f09c0064377005133dbab39d"} Feb 27 11:22:58 crc kubenswrapper[4728]: I0227 11:22:58.479974 4728 generic.go:334] "Generic (PLEG): container finished" podID="3d6d67d0-dafd-4ecc-af66-535398f1e80b" containerID="4ab3188b515e4072e42b3644d0deff42382b8561f09c0064377005133dbab39d" exitCode=0 Feb 27 11:22:58 crc kubenswrapper[4728]: I0227 11:22:58.480333 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fc4zg" 
event={"ID":"3d6d67d0-dafd-4ecc-af66-535398f1e80b","Type":"ContainerDied","Data":"4ab3188b515e4072e42b3644d0deff42382b8561f09c0064377005133dbab39d"} Feb 27 11:22:59 crc kubenswrapper[4728]: I0227 11:22:59.493561 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fc4zg" event={"ID":"3d6d67d0-dafd-4ecc-af66-535398f1e80b","Type":"ContainerStarted","Data":"95c0290c38dd2a7d260253f69014453585ceb4fa26e149829747664dd55b863b"} Feb 27 11:22:59 crc kubenswrapper[4728]: I0227 11:22:59.519579 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fc4zg" podStartSLOduration=2.024037528 podStartE2EDuration="5.519556485s" podCreationTimestamp="2026-02-27 11:22:54 +0000 UTC" firstStartedPulling="2026-02-27 11:22:55.444784107 +0000 UTC m=+3395.407150203" lastFinishedPulling="2026-02-27 11:22:58.940303054 +0000 UTC m=+3398.902669160" observedRunningTime="2026-02-27 11:22:59.509994645 +0000 UTC m=+3399.472360751" watchObservedRunningTime="2026-02-27 11:22:59.519556485 +0000 UTC m=+3399.481922601" Feb 27 11:23:02 crc kubenswrapper[4728]: I0227 11:23:02.726285 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:23:02 crc kubenswrapper[4728]: E0227 11:23:02.727092 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:23:04 crc kubenswrapper[4728]: I0227 11:23:04.391139 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fc4zg" Feb 27 11:23:04 crc 
kubenswrapper[4728]: I0227 11:23:04.391381 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fc4zg" Feb 27 11:23:04 crc kubenswrapper[4728]: I0227 11:23:04.500545 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fc4zg" Feb 27 11:23:04 crc kubenswrapper[4728]: I0227 11:23:04.602731 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fc4zg" Feb 27 11:23:04 crc kubenswrapper[4728]: I0227 11:23:04.744733 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fc4zg"] Feb 27 11:23:06 crc kubenswrapper[4728]: I0227 11:23:06.573138 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fc4zg" podUID="3d6d67d0-dafd-4ecc-af66-535398f1e80b" containerName="registry-server" containerID="cri-o://95c0290c38dd2a7d260253f69014453585ceb4fa26e149829747664dd55b863b" gracePeriod=2 Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.253099 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fc4zg" Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.353968 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6d67d0-dafd-4ecc-af66-535398f1e80b-utilities\") pod \"3d6d67d0-dafd-4ecc-af66-535398f1e80b\" (UID: \"3d6d67d0-dafd-4ecc-af66-535398f1e80b\") " Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.354097 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6d67d0-dafd-4ecc-af66-535398f1e80b-catalog-content\") pod \"3d6d67d0-dafd-4ecc-af66-535398f1e80b\" (UID: \"3d6d67d0-dafd-4ecc-af66-535398f1e80b\") " Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.354287 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm6z7\" (UniqueName: \"kubernetes.io/projected/3d6d67d0-dafd-4ecc-af66-535398f1e80b-kube-api-access-nm6z7\") pod \"3d6d67d0-dafd-4ecc-af66-535398f1e80b\" (UID: \"3d6d67d0-dafd-4ecc-af66-535398f1e80b\") " Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.356624 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d6d67d0-dafd-4ecc-af66-535398f1e80b-utilities" (OuterVolumeSpecName: "utilities") pod "3d6d67d0-dafd-4ecc-af66-535398f1e80b" (UID: "3d6d67d0-dafd-4ecc-af66-535398f1e80b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.390280 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d6d67d0-dafd-4ecc-af66-535398f1e80b-kube-api-access-nm6z7" (OuterVolumeSpecName: "kube-api-access-nm6z7") pod "3d6d67d0-dafd-4ecc-af66-535398f1e80b" (UID: "3d6d67d0-dafd-4ecc-af66-535398f1e80b"). InnerVolumeSpecName "kube-api-access-nm6z7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.443093 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d6d67d0-dafd-4ecc-af66-535398f1e80b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d6d67d0-dafd-4ecc-af66-535398f1e80b" (UID: "3d6d67d0-dafd-4ecc-af66-535398f1e80b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.457241 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d6d67d0-dafd-4ecc-af66-535398f1e80b-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.457270 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d6d67d0-dafd-4ecc-af66-535398f1e80b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.457280 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm6z7\" (UniqueName: \"kubernetes.io/projected/3d6d67d0-dafd-4ecc-af66-535398f1e80b-kube-api-access-nm6z7\") on node \"crc\" DevicePath \"\"" Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.586237 4728 generic.go:334] "Generic (PLEG): container finished" podID="3d6d67d0-dafd-4ecc-af66-535398f1e80b" containerID="95c0290c38dd2a7d260253f69014453585ceb4fa26e149829747664dd55b863b" exitCode=0 Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.586280 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fc4zg" event={"ID":"3d6d67d0-dafd-4ecc-af66-535398f1e80b","Type":"ContainerDied","Data":"95c0290c38dd2a7d260253f69014453585ceb4fa26e149829747664dd55b863b"} Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.586345 4728 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-fc4zg" event={"ID":"3d6d67d0-dafd-4ecc-af66-535398f1e80b","Type":"ContainerDied","Data":"b390214e4237bd8c350064c25d46b6e181d5c2a42cfb8589bd087129d47c69b3"} Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.586373 4728 scope.go:117] "RemoveContainer" containerID="95c0290c38dd2a7d260253f69014453585ceb4fa26e149829747664dd55b863b" Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.586364 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fc4zg" Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.612482 4728 scope.go:117] "RemoveContainer" containerID="4ab3188b515e4072e42b3644d0deff42382b8561f09c0064377005133dbab39d" Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.626867 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fc4zg"] Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.637568 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fc4zg"] Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.649809 4728 scope.go:117] "RemoveContainer" containerID="4ba7d3117ff3bcb8fbb4c3aacb3ee82a74f89800297b7886c1bf9e1698b76f25" Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.695842 4728 scope.go:117] "RemoveContainer" containerID="95c0290c38dd2a7d260253f69014453585ceb4fa26e149829747664dd55b863b" Feb 27 11:23:07 crc kubenswrapper[4728]: E0227 11:23:07.696576 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95c0290c38dd2a7d260253f69014453585ceb4fa26e149829747664dd55b863b\": container with ID starting with 95c0290c38dd2a7d260253f69014453585ceb4fa26e149829747664dd55b863b not found: ID does not exist" containerID="95c0290c38dd2a7d260253f69014453585ceb4fa26e149829747664dd55b863b" Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 
11:23:07.696623 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95c0290c38dd2a7d260253f69014453585ceb4fa26e149829747664dd55b863b"} err="failed to get container status \"95c0290c38dd2a7d260253f69014453585ceb4fa26e149829747664dd55b863b\": rpc error: code = NotFound desc = could not find container \"95c0290c38dd2a7d260253f69014453585ceb4fa26e149829747664dd55b863b\": container with ID starting with 95c0290c38dd2a7d260253f69014453585ceb4fa26e149829747664dd55b863b not found: ID does not exist" Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.696646 4728 scope.go:117] "RemoveContainer" containerID="4ab3188b515e4072e42b3644d0deff42382b8561f09c0064377005133dbab39d" Feb 27 11:23:07 crc kubenswrapper[4728]: E0227 11:23:07.696914 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ab3188b515e4072e42b3644d0deff42382b8561f09c0064377005133dbab39d\": container with ID starting with 4ab3188b515e4072e42b3644d0deff42382b8561f09c0064377005133dbab39d not found: ID does not exist" containerID="4ab3188b515e4072e42b3644d0deff42382b8561f09c0064377005133dbab39d" Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.696944 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ab3188b515e4072e42b3644d0deff42382b8561f09c0064377005133dbab39d"} err="failed to get container status \"4ab3188b515e4072e42b3644d0deff42382b8561f09c0064377005133dbab39d\": rpc error: code = NotFound desc = could not find container \"4ab3188b515e4072e42b3644d0deff42382b8561f09c0064377005133dbab39d\": container with ID starting with 4ab3188b515e4072e42b3644d0deff42382b8561f09c0064377005133dbab39d not found: ID does not exist" Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.696965 4728 scope.go:117] "RemoveContainer" containerID="4ba7d3117ff3bcb8fbb4c3aacb3ee82a74f89800297b7886c1bf9e1698b76f25" Feb 27 11:23:07 crc 
kubenswrapper[4728]: E0227 11:23:07.697175 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ba7d3117ff3bcb8fbb4c3aacb3ee82a74f89800297b7886c1bf9e1698b76f25\": container with ID starting with 4ba7d3117ff3bcb8fbb4c3aacb3ee82a74f89800297b7886c1bf9e1698b76f25 not found: ID does not exist" containerID="4ba7d3117ff3bcb8fbb4c3aacb3ee82a74f89800297b7886c1bf9e1698b76f25" Feb 27 11:23:07 crc kubenswrapper[4728]: I0227 11:23:07.697213 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba7d3117ff3bcb8fbb4c3aacb3ee82a74f89800297b7886c1bf9e1698b76f25"} err="failed to get container status \"4ba7d3117ff3bcb8fbb4c3aacb3ee82a74f89800297b7886c1bf9e1698b76f25\": rpc error: code = NotFound desc = could not find container \"4ba7d3117ff3bcb8fbb4c3aacb3ee82a74f89800297b7886c1bf9e1698b76f25\": container with ID starting with 4ba7d3117ff3bcb8fbb4c3aacb3ee82a74f89800297b7886c1bf9e1698b76f25 not found: ID does not exist" Feb 27 11:23:08 crc kubenswrapper[4728]: I0227 11:23:08.752247 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d6d67d0-dafd-4ecc-af66-535398f1e80b" path="/var/lib/kubelet/pods/3d6d67d0-dafd-4ecc-af66-535398f1e80b/volumes" Feb 27 11:23:16 crc kubenswrapper[4728]: I0227 11:23:16.725643 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:23:16 crc kubenswrapper[4728]: E0227 11:23:16.726524 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:23:20 crc 
kubenswrapper[4728]: I0227 11:23:20.141900 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q5qtj"] Feb 27 11:23:20 crc kubenswrapper[4728]: E0227 11:23:20.142951 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6d67d0-dafd-4ecc-af66-535398f1e80b" containerName="registry-server" Feb 27 11:23:20 crc kubenswrapper[4728]: I0227 11:23:20.142964 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6d67d0-dafd-4ecc-af66-535398f1e80b" containerName="registry-server" Feb 27 11:23:20 crc kubenswrapper[4728]: E0227 11:23:20.142990 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6d67d0-dafd-4ecc-af66-535398f1e80b" containerName="extract-utilities" Feb 27 11:23:20 crc kubenswrapper[4728]: I0227 11:23:20.142997 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6d67d0-dafd-4ecc-af66-535398f1e80b" containerName="extract-utilities" Feb 27 11:23:20 crc kubenswrapper[4728]: E0227 11:23:20.143035 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6d67d0-dafd-4ecc-af66-535398f1e80b" containerName="extract-content" Feb 27 11:23:20 crc kubenswrapper[4728]: I0227 11:23:20.143042 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6d67d0-dafd-4ecc-af66-535398f1e80b" containerName="extract-content" Feb 27 11:23:20 crc kubenswrapper[4728]: I0227 11:23:20.143388 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d6d67d0-dafd-4ecc-af66-535398f1e80b" containerName="registry-server" Feb 27 11:23:20 crc kubenswrapper[4728]: I0227 11:23:20.145671 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q5qtj" Feb 27 11:23:20 crc kubenswrapper[4728]: I0227 11:23:20.166252 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q5qtj"] Feb 27 11:23:20 crc kubenswrapper[4728]: I0227 11:23:20.242145 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjqb7\" (UniqueName: \"kubernetes.io/projected/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5-kube-api-access-jjqb7\") pod \"redhat-operators-q5qtj\" (UID: \"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5\") " pod="openshift-marketplace/redhat-operators-q5qtj" Feb 27 11:23:20 crc kubenswrapper[4728]: I0227 11:23:20.242555 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5-utilities\") pod \"redhat-operators-q5qtj\" (UID: \"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5\") " pod="openshift-marketplace/redhat-operators-q5qtj" Feb 27 11:23:20 crc kubenswrapper[4728]: I0227 11:23:20.242604 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5-catalog-content\") pod \"redhat-operators-q5qtj\" (UID: \"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5\") " pod="openshift-marketplace/redhat-operators-q5qtj" Feb 27 11:23:20 crc kubenswrapper[4728]: I0227 11:23:20.345042 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjqb7\" (UniqueName: \"kubernetes.io/projected/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5-kube-api-access-jjqb7\") pod \"redhat-operators-q5qtj\" (UID: \"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5\") " pod="openshift-marketplace/redhat-operators-q5qtj" Feb 27 11:23:20 crc kubenswrapper[4728]: I0227 11:23:20.345221 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5-utilities\") pod \"redhat-operators-q5qtj\" (UID: \"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5\") " pod="openshift-marketplace/redhat-operators-q5qtj" Feb 27 11:23:20 crc kubenswrapper[4728]: I0227 11:23:20.345243 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5-catalog-content\") pod \"redhat-operators-q5qtj\" (UID: \"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5\") " pod="openshift-marketplace/redhat-operators-q5qtj" Feb 27 11:23:20 crc kubenswrapper[4728]: I0227 11:23:20.345753 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5-catalog-content\") pod \"redhat-operators-q5qtj\" (UID: \"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5\") " pod="openshift-marketplace/redhat-operators-q5qtj" Feb 27 11:23:20 crc kubenswrapper[4728]: I0227 11:23:20.345854 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5-utilities\") pod \"redhat-operators-q5qtj\" (UID: \"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5\") " pod="openshift-marketplace/redhat-operators-q5qtj" Feb 27 11:23:20 crc kubenswrapper[4728]: I0227 11:23:20.368465 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjqb7\" (UniqueName: \"kubernetes.io/projected/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5-kube-api-access-jjqb7\") pod \"redhat-operators-q5qtj\" (UID: \"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5\") " pod="openshift-marketplace/redhat-operators-q5qtj" Feb 27 11:23:20 crc kubenswrapper[4728]: I0227 11:23:20.484674 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q5qtj" Feb 27 11:23:21 crc kubenswrapper[4728]: I0227 11:23:21.090750 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q5qtj"] Feb 27 11:23:21 crc kubenswrapper[4728]: I0227 11:23:21.759279 4728 generic.go:334] "Generic (PLEG): container finished" podID="f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5" containerID="b3803f76b963f8b33549b8217ae47a2a2e3da1d7e2dc50d25e44d7abc1535732" exitCode=0 Feb 27 11:23:21 crc kubenswrapper[4728]: I0227 11:23:21.759510 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5qtj" event={"ID":"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5","Type":"ContainerDied","Data":"b3803f76b963f8b33549b8217ae47a2a2e3da1d7e2dc50d25e44d7abc1535732"} Feb 27 11:23:21 crc kubenswrapper[4728]: I0227 11:23:21.759538 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5qtj" event={"ID":"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5","Type":"ContainerStarted","Data":"b10bdde5b6d4c1fe66c1048c69fdb00c54e9eff8c358343f1034b7339d5b54fb"} Feb 27 11:23:22 crc kubenswrapper[4728]: I0227 11:23:22.776143 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5qtj" event={"ID":"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5","Type":"ContainerStarted","Data":"950c1b4dba37d581dccd14cdf81557ba4eebdc39b99b83fc9f3a54922019090e"} Feb 27 11:23:27 crc kubenswrapper[4728]: I0227 11:23:27.725678 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:23:27 crc kubenswrapper[4728]: E0227 11:23:27.727467 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:23:28 crc kubenswrapper[4728]: I0227 11:23:28.873823 4728 generic.go:334] "Generic (PLEG): container finished" podID="f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5" containerID="950c1b4dba37d581dccd14cdf81557ba4eebdc39b99b83fc9f3a54922019090e" exitCode=0 Feb 27 11:23:28 crc kubenswrapper[4728]: I0227 11:23:28.873878 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5qtj" event={"ID":"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5","Type":"ContainerDied","Data":"950c1b4dba37d581dccd14cdf81557ba4eebdc39b99b83fc9f3a54922019090e"} Feb 27 11:23:29 crc kubenswrapper[4728]: I0227 11:23:29.893851 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5qtj" event={"ID":"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5","Type":"ContainerStarted","Data":"d833ebe4123267cee335deaedac27cf56e8e7838fd59cf6cae1bb780f0931ec0"} Feb 27 11:23:29 crc kubenswrapper[4728]: I0227 11:23:29.932383 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q5qtj" podStartSLOduration=2.33087865 podStartE2EDuration="9.932357568s" podCreationTimestamp="2026-02-27 11:23:20 +0000 UTC" firstStartedPulling="2026-02-27 11:23:21.762280929 +0000 UTC m=+3421.724647035" lastFinishedPulling="2026-02-27 11:23:29.363759847 +0000 UTC m=+3429.326125953" observedRunningTime="2026-02-27 11:23:29.918963232 +0000 UTC m=+3429.881329348" watchObservedRunningTime="2026-02-27 11:23:29.932357568 +0000 UTC m=+3429.894723674" Feb 27 11:23:30 crc kubenswrapper[4728]: I0227 11:23:30.485205 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q5qtj" Feb 27 11:23:30 crc kubenswrapper[4728]: I0227 11:23:30.485269 4728 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q5qtj" Feb 27 11:23:31 crc kubenswrapper[4728]: I0227 11:23:31.581524 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5qtj" podUID="f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5" containerName="registry-server" probeResult="failure" output=< Feb 27 11:23:31 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:23:31 crc kubenswrapper[4728]: > Feb 27 11:23:38 crc kubenswrapper[4728]: I0227 11:23:38.725302 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:23:38 crc kubenswrapper[4728]: E0227 11:23:38.726151 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:23:41 crc kubenswrapper[4728]: I0227 11:23:41.538749 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5qtj" podUID="f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5" containerName="registry-server" probeResult="failure" output=< Feb 27 11:23:41 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:23:41 crc kubenswrapper[4728]: > Feb 27 11:23:51 crc kubenswrapper[4728]: I0227 11:23:51.541265 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5qtj" podUID="f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5" containerName="registry-server" probeResult="failure" output=< Feb 27 11:23:51 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s 
Feb 27 11:23:51 crc kubenswrapper[4728]: > Feb 27 11:23:51 crc kubenswrapper[4728]: I0227 11:23:51.725935 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:23:51 crc kubenswrapper[4728]: E0227 11:23:51.726570 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.094665 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dhndm"] Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.097570 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhndm" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.111914 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dhndm"] Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.117184 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lszpw\" (UniqueName: \"kubernetes.io/projected/bb23efbd-a263-46bf-bcb8-64110410e8d1-kube-api-access-lszpw\") pod \"community-operators-dhndm\" (UID: \"bb23efbd-a263-46bf-bcb8-64110410e8d1\") " pod="openshift-marketplace/community-operators-dhndm" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.117276 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb23efbd-a263-46bf-bcb8-64110410e8d1-utilities\") pod \"community-operators-dhndm\" (UID: 
\"bb23efbd-a263-46bf-bcb8-64110410e8d1\") " pod="openshift-marketplace/community-operators-dhndm" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.117338 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb23efbd-a263-46bf-bcb8-64110410e8d1-catalog-content\") pod \"community-operators-dhndm\" (UID: \"bb23efbd-a263-46bf-bcb8-64110410e8d1\") " pod="openshift-marketplace/community-operators-dhndm" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.192448 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536524-57mpz"] Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.193956 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536524-57mpz" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.197103 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.197611 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.197746 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.221001 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb23efbd-a263-46bf-bcb8-64110410e8d1-utilities\") pod \"community-operators-dhndm\" (UID: \"bb23efbd-a263-46bf-bcb8-64110410e8d1\") " pod="openshift-marketplace/community-operators-dhndm" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.221178 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bb23efbd-a263-46bf-bcb8-64110410e8d1-catalog-content\") pod \"community-operators-dhndm\" (UID: \"bb23efbd-a263-46bf-bcb8-64110410e8d1\") " pod="openshift-marketplace/community-operators-dhndm" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.221260 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snh8g\" (UniqueName: \"kubernetes.io/projected/6b7b33ea-08da-4e39-9c2e-11292a8b4901-kube-api-access-snh8g\") pod \"auto-csr-approver-29536524-57mpz\" (UID: \"6b7b33ea-08da-4e39-9c2e-11292a8b4901\") " pod="openshift-infra/auto-csr-approver-29536524-57mpz" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.221573 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lszpw\" (UniqueName: \"kubernetes.io/projected/bb23efbd-a263-46bf-bcb8-64110410e8d1-kube-api-access-lszpw\") pod \"community-operators-dhndm\" (UID: \"bb23efbd-a263-46bf-bcb8-64110410e8d1\") " pod="openshift-marketplace/community-operators-dhndm" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.225089 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb23efbd-a263-46bf-bcb8-64110410e8d1-catalog-content\") pod \"community-operators-dhndm\" (UID: \"bb23efbd-a263-46bf-bcb8-64110410e8d1\") " pod="openshift-marketplace/community-operators-dhndm" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.232924 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb23efbd-a263-46bf-bcb8-64110410e8d1-utilities\") pod \"community-operators-dhndm\" (UID: \"bb23efbd-a263-46bf-bcb8-64110410e8d1\") " pod="openshift-marketplace/community-operators-dhndm" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.237736 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-infra/auto-csr-approver-29536524-57mpz"] Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.245560 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lszpw\" (UniqueName: \"kubernetes.io/projected/bb23efbd-a263-46bf-bcb8-64110410e8d1-kube-api-access-lszpw\") pod \"community-operators-dhndm\" (UID: \"bb23efbd-a263-46bf-bcb8-64110410e8d1\") " pod="openshift-marketplace/community-operators-dhndm" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.326168 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snh8g\" (UniqueName: \"kubernetes.io/projected/6b7b33ea-08da-4e39-9c2e-11292a8b4901-kube-api-access-snh8g\") pod \"auto-csr-approver-29536524-57mpz\" (UID: \"6b7b33ea-08da-4e39-9c2e-11292a8b4901\") " pod="openshift-infra/auto-csr-approver-29536524-57mpz" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.350409 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snh8g\" (UniqueName: \"kubernetes.io/projected/6b7b33ea-08da-4e39-9c2e-11292a8b4901-kube-api-access-snh8g\") pod \"auto-csr-approver-29536524-57mpz\" (UID: \"6b7b33ea-08da-4e39-9c2e-11292a8b4901\") " pod="openshift-infra/auto-csr-approver-29536524-57mpz" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.417166 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhndm" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.525012 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536524-57mpz" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.569233 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q5qtj" Feb 27 11:24:00 crc kubenswrapper[4728]: I0227 11:24:00.640789 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q5qtj" Feb 27 11:24:01 crc kubenswrapper[4728]: I0227 11:24:01.096933 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dhndm"] Feb 27 11:24:01 crc kubenswrapper[4728]: I0227 11:24:01.284263 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhndm" event={"ID":"bb23efbd-a263-46bf-bcb8-64110410e8d1","Type":"ContainerStarted","Data":"e5c523dfcc333bd89d98bf44b33e11fcc889da0505a18136595e58b3508ffe7a"} Feb 27 11:24:01 crc kubenswrapper[4728]: I0227 11:24:01.284315 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhndm" event={"ID":"bb23efbd-a263-46bf-bcb8-64110410e8d1","Type":"ContainerStarted","Data":"8c29de48c50a063f563f83029fb29b2c531396e80024f868df1ccf55d6f3ca73"} Feb 27 11:24:01 crc kubenswrapper[4728]: I0227 11:24:01.320797 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536524-57mpz"] Feb 27 11:24:01 crc kubenswrapper[4728]: W0227 11:24:01.325410 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b7b33ea_08da_4e39_9c2e_11292a8b4901.slice/crio-6b4413f2491fd20319d5058a2ca1e03764404a81ea0dd492b0796abb82131088 WatchSource:0}: Error finding container 6b4413f2491fd20319d5058a2ca1e03764404a81ea0dd492b0796abb82131088: Status 404 returned error can't find the container with id 6b4413f2491fd20319d5058a2ca1e03764404a81ea0dd492b0796abb82131088 Feb 27 
11:24:02 crc kubenswrapper[4728]: I0227 11:24:02.302630 4728 generic.go:334] "Generic (PLEG): container finished" podID="bb23efbd-a263-46bf-bcb8-64110410e8d1" containerID="e5c523dfcc333bd89d98bf44b33e11fcc889da0505a18136595e58b3508ffe7a" exitCode=0 Feb 27 11:24:02 crc kubenswrapper[4728]: I0227 11:24:02.302837 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhndm" event={"ID":"bb23efbd-a263-46bf-bcb8-64110410e8d1","Type":"ContainerDied","Data":"e5c523dfcc333bd89d98bf44b33e11fcc889da0505a18136595e58b3508ffe7a"} Feb 27 11:24:02 crc kubenswrapper[4728]: I0227 11:24:02.307335 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536524-57mpz" event={"ID":"6b7b33ea-08da-4e39-9c2e-11292a8b4901","Type":"ContainerStarted","Data":"6b4413f2491fd20319d5058a2ca1e03764404a81ea0dd492b0796abb82131088"} Feb 27 11:24:02 crc kubenswrapper[4728]: I0227 11:24:02.878951 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q5qtj"] Feb 27 11:24:02 crc kubenswrapper[4728]: I0227 11:24:02.880999 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q5qtj" podUID="f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5" containerName="registry-server" containerID="cri-o://d833ebe4123267cee335deaedac27cf56e8e7838fd59cf6cae1bb780f0931ec0" gracePeriod=2 Feb 27 11:24:03 crc kubenswrapper[4728]: I0227 11:24:03.335320 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhndm" event={"ID":"bb23efbd-a263-46bf-bcb8-64110410e8d1","Type":"ContainerStarted","Data":"21096b7938064dbece42457d47572d1f5b51d1ddd5d53826338733864a14ab87"} Feb 27 11:24:03 crc kubenswrapper[4728]: I0227 11:24:03.343437 4728 generic.go:334] "Generic (PLEG): container finished" podID="6b7b33ea-08da-4e39-9c2e-11292a8b4901" 
containerID="f14258437bbc9dd89432d81129d1039a48ccdb53d17fd1aec5ead6d4c06e6c3c" exitCode=0 Feb 27 11:24:03 crc kubenswrapper[4728]: I0227 11:24:03.343747 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536524-57mpz" event={"ID":"6b7b33ea-08da-4e39-9c2e-11292a8b4901","Type":"ContainerDied","Data":"f14258437bbc9dd89432d81129d1039a48ccdb53d17fd1aec5ead6d4c06e6c3c"} Feb 27 11:24:03 crc kubenswrapper[4728]: I0227 11:24:03.347746 4728 generic.go:334] "Generic (PLEG): container finished" podID="f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5" containerID="d833ebe4123267cee335deaedac27cf56e8e7838fd59cf6cae1bb780f0931ec0" exitCode=0 Feb 27 11:24:03 crc kubenswrapper[4728]: I0227 11:24:03.347799 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5qtj" event={"ID":"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5","Type":"ContainerDied","Data":"d833ebe4123267cee335deaedac27cf56e8e7838fd59cf6cae1bb780f0931ec0"} Feb 27 11:24:03 crc kubenswrapper[4728]: I0227 11:24:03.496028 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q5qtj" Feb 27 11:24:03 crc kubenswrapper[4728]: I0227 11:24:03.628438 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5-utilities\") pod \"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5\" (UID: \"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5\") " Feb 27 11:24:03 crc kubenswrapper[4728]: I0227 11:24:03.628675 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjqb7\" (UniqueName: \"kubernetes.io/projected/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5-kube-api-access-jjqb7\") pod \"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5\" (UID: \"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5\") " Feb 27 11:24:03 crc kubenswrapper[4728]: I0227 11:24:03.628801 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5-catalog-content\") pod \"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5\" (UID: \"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5\") " Feb 27 11:24:03 crc kubenswrapper[4728]: I0227 11:24:03.629146 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5-utilities" (OuterVolumeSpecName: "utilities") pod "f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5" (UID: "f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:24:03 crc kubenswrapper[4728]: I0227 11:24:03.629665 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:24:03 crc kubenswrapper[4728]: I0227 11:24:03.648834 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5-kube-api-access-jjqb7" (OuterVolumeSpecName: "kube-api-access-jjqb7") pod "f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5" (UID: "f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5"). InnerVolumeSpecName "kube-api-access-jjqb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:24:03 crc kubenswrapper[4728]: I0227 11:24:03.724791 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:24:03 crc kubenswrapper[4728]: E0227 11:24:03.725552 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:24:03 crc kubenswrapper[4728]: I0227 11:24:03.731718 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjqb7\" (UniqueName: \"kubernetes.io/projected/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5-kube-api-access-jjqb7\") on node \"crc\" DevicePath \"\"" Feb 27 11:24:03 crc kubenswrapper[4728]: I0227 11:24:03.759408 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5" (UID: "f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:24:03 crc kubenswrapper[4728]: I0227 11:24:03.834266 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:24:04 crc kubenswrapper[4728]: I0227 11:24:04.368708 4728 generic.go:334] "Generic (PLEG): container finished" podID="bb23efbd-a263-46bf-bcb8-64110410e8d1" containerID="21096b7938064dbece42457d47572d1f5b51d1ddd5d53826338733864a14ab87" exitCode=0 Feb 27 11:24:04 crc kubenswrapper[4728]: I0227 11:24:04.368862 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhndm" event={"ID":"bb23efbd-a263-46bf-bcb8-64110410e8d1","Type":"ContainerDied","Data":"21096b7938064dbece42457d47572d1f5b51d1ddd5d53826338733864a14ab87"} Feb 27 11:24:04 crc kubenswrapper[4728]: I0227 11:24:04.375092 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q5qtj" Feb 27 11:24:04 crc kubenswrapper[4728]: I0227 11:24:04.375127 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5qtj" event={"ID":"f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5","Type":"ContainerDied","Data":"b10bdde5b6d4c1fe66c1048c69fdb00c54e9eff8c358343f1034b7339d5b54fb"} Feb 27 11:24:04 crc kubenswrapper[4728]: I0227 11:24:04.375197 4728 scope.go:117] "RemoveContainer" containerID="d833ebe4123267cee335deaedac27cf56e8e7838fd59cf6cae1bb780f0931ec0" Feb 27 11:24:04 crc kubenswrapper[4728]: I0227 11:24:04.452265 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q5qtj"] Feb 27 11:24:04 crc kubenswrapper[4728]: I0227 11:24:04.472724 4728 scope.go:117] "RemoveContainer" containerID="950c1b4dba37d581dccd14cdf81557ba4eebdc39b99b83fc9f3a54922019090e" Feb 27 11:24:04 crc kubenswrapper[4728]: I0227 11:24:04.479022 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q5qtj"] Feb 27 11:24:04 crc kubenswrapper[4728]: I0227 11:24:04.495648 4728 scope.go:117] "RemoveContainer" containerID="b3803f76b963f8b33549b8217ae47a2a2e3da1d7e2dc50d25e44d7abc1535732" Feb 27 11:24:04 crc kubenswrapper[4728]: I0227 11:24:04.738272 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5" path="/var/lib/kubelet/pods/f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5/volumes" Feb 27 11:24:04 crc kubenswrapper[4728]: I0227 11:24:04.815337 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536524-57mpz" Feb 27 11:24:04 crc kubenswrapper[4728]: I0227 11:24:04.965674 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snh8g\" (UniqueName: \"kubernetes.io/projected/6b7b33ea-08da-4e39-9c2e-11292a8b4901-kube-api-access-snh8g\") pod \"6b7b33ea-08da-4e39-9c2e-11292a8b4901\" (UID: \"6b7b33ea-08da-4e39-9c2e-11292a8b4901\") " Feb 27 11:24:04 crc kubenswrapper[4728]: I0227 11:24:04.972133 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b7b33ea-08da-4e39-9c2e-11292a8b4901-kube-api-access-snh8g" (OuterVolumeSpecName: "kube-api-access-snh8g") pod "6b7b33ea-08da-4e39-9c2e-11292a8b4901" (UID: "6b7b33ea-08da-4e39-9c2e-11292a8b4901"). InnerVolumeSpecName "kube-api-access-snh8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:24:05 crc kubenswrapper[4728]: I0227 11:24:05.068729 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snh8g\" (UniqueName: \"kubernetes.io/projected/6b7b33ea-08da-4e39-9c2e-11292a8b4901-kube-api-access-snh8g\") on node \"crc\" DevicePath \"\"" Feb 27 11:24:05 crc kubenswrapper[4728]: I0227 11:24:05.398779 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536524-57mpz" Feb 27 11:24:05 crc kubenswrapper[4728]: I0227 11:24:05.398758 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536524-57mpz" event={"ID":"6b7b33ea-08da-4e39-9c2e-11292a8b4901","Type":"ContainerDied","Data":"6b4413f2491fd20319d5058a2ca1e03764404a81ea0dd492b0796abb82131088"} Feb 27 11:24:05 crc kubenswrapper[4728]: I0227 11:24:05.399430 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b4413f2491fd20319d5058a2ca1e03764404a81ea0dd492b0796abb82131088" Feb 27 11:24:05 crc kubenswrapper[4728]: I0227 11:24:05.405052 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhndm" event={"ID":"bb23efbd-a263-46bf-bcb8-64110410e8d1","Type":"ContainerStarted","Data":"628ecdff8a58bbfca27b9b418007fbebdba2fbd39c9d5b745e0b9d174d00abca"} Feb 27 11:24:05 crc kubenswrapper[4728]: I0227 11:24:05.444653 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dhndm" podStartSLOduration=2.815993643 podStartE2EDuration="5.444619885s" podCreationTimestamp="2026-02-27 11:24:00 +0000 UTC" firstStartedPulling="2026-02-27 11:24:02.310078781 +0000 UTC m=+3462.272444887" lastFinishedPulling="2026-02-27 11:24:04.938705003 +0000 UTC m=+3464.901071129" observedRunningTime="2026-02-27 11:24:05.424538828 +0000 UTC m=+3465.386905004" watchObservedRunningTime="2026-02-27 11:24:05.444619885 +0000 UTC m=+3465.406986041" Feb 27 11:24:05 crc kubenswrapper[4728]: I0227 11:24:05.893989 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536518-7rm4c"] Feb 27 11:24:05 crc kubenswrapper[4728]: I0227 11:24:05.907809 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536518-7rm4c"] Feb 27 11:24:06 crc kubenswrapper[4728]: I0227 11:24:06.755726 4728 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="164c972e-39ae-42c0-8754-d8776d5bba3e" path="/var/lib/kubelet/pods/164c972e-39ae-42c0-8754-d8776d5bba3e/volumes" Feb 27 11:24:10 crc kubenswrapper[4728]: I0227 11:24:10.417562 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dhndm" Feb 27 11:24:10 crc kubenswrapper[4728]: I0227 11:24:10.418199 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dhndm" Feb 27 11:24:10 crc kubenswrapper[4728]: I0227 11:24:10.954293 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dhndm" Feb 27 11:24:11 crc kubenswrapper[4728]: I0227 11:24:11.010787 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dhndm" Feb 27 11:24:11 crc kubenswrapper[4728]: I0227 11:24:11.197589 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dhndm"] Feb 27 11:24:12 crc kubenswrapper[4728]: I0227 11:24:12.488924 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dhndm" podUID="bb23efbd-a263-46bf-bcb8-64110410e8d1" containerName="registry-server" containerID="cri-o://628ecdff8a58bbfca27b9b418007fbebdba2fbd39c9d5b745e0b9d174d00abca" gracePeriod=2 Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.022420 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dhndm" Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.081936 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb23efbd-a263-46bf-bcb8-64110410e8d1-utilities\") pod \"bb23efbd-a263-46bf-bcb8-64110410e8d1\" (UID: \"bb23efbd-a263-46bf-bcb8-64110410e8d1\") " Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.082020 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb23efbd-a263-46bf-bcb8-64110410e8d1-catalog-content\") pod \"bb23efbd-a263-46bf-bcb8-64110410e8d1\" (UID: \"bb23efbd-a263-46bf-bcb8-64110410e8d1\") " Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.082243 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lszpw\" (UniqueName: \"kubernetes.io/projected/bb23efbd-a263-46bf-bcb8-64110410e8d1-kube-api-access-lszpw\") pod \"bb23efbd-a263-46bf-bcb8-64110410e8d1\" (UID: \"bb23efbd-a263-46bf-bcb8-64110410e8d1\") " Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.082889 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb23efbd-a263-46bf-bcb8-64110410e8d1-utilities" (OuterVolumeSpecName: "utilities") pod "bb23efbd-a263-46bf-bcb8-64110410e8d1" (UID: "bb23efbd-a263-46bf-bcb8-64110410e8d1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.083399 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb23efbd-a263-46bf-bcb8-64110410e8d1-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.088546 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb23efbd-a263-46bf-bcb8-64110410e8d1-kube-api-access-lszpw" (OuterVolumeSpecName: "kube-api-access-lszpw") pod "bb23efbd-a263-46bf-bcb8-64110410e8d1" (UID: "bb23efbd-a263-46bf-bcb8-64110410e8d1"). InnerVolumeSpecName "kube-api-access-lszpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.154673 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb23efbd-a263-46bf-bcb8-64110410e8d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb23efbd-a263-46bf-bcb8-64110410e8d1" (UID: "bb23efbd-a263-46bf-bcb8-64110410e8d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.186602 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb23efbd-a263-46bf-bcb8-64110410e8d1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.186641 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lszpw\" (UniqueName: \"kubernetes.io/projected/bb23efbd-a263-46bf-bcb8-64110410e8d1-kube-api-access-lszpw\") on node \"crc\" DevicePath \"\"" Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.502053 4728 generic.go:334] "Generic (PLEG): container finished" podID="bb23efbd-a263-46bf-bcb8-64110410e8d1" containerID="628ecdff8a58bbfca27b9b418007fbebdba2fbd39c9d5b745e0b9d174d00abca" exitCode=0 Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.502097 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhndm" event={"ID":"bb23efbd-a263-46bf-bcb8-64110410e8d1","Type":"ContainerDied","Data":"628ecdff8a58bbfca27b9b418007fbebdba2fbd39c9d5b745e0b9d174d00abca"} Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.502134 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhndm" event={"ID":"bb23efbd-a263-46bf-bcb8-64110410e8d1","Type":"ContainerDied","Data":"8c29de48c50a063f563f83029fb29b2c531396e80024f868df1ccf55d6f3ca73"} Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.502156 4728 scope.go:117] "RemoveContainer" containerID="628ecdff8a58bbfca27b9b418007fbebdba2fbd39c9d5b745e0b9d174d00abca" Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.502161 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dhndm" Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.524934 4728 scope.go:117] "RemoveContainer" containerID="21096b7938064dbece42457d47572d1f5b51d1ddd5d53826338733864a14ab87" Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.558741 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dhndm"] Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.562832 4728 scope.go:117] "RemoveContainer" containerID="e5c523dfcc333bd89d98bf44b33e11fcc889da0505a18136595e58b3508ffe7a" Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.566445 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dhndm"] Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.632365 4728 scope.go:117] "RemoveContainer" containerID="628ecdff8a58bbfca27b9b418007fbebdba2fbd39c9d5b745e0b9d174d00abca" Feb 27 11:24:13 crc kubenswrapper[4728]: E0227 11:24:13.632859 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"628ecdff8a58bbfca27b9b418007fbebdba2fbd39c9d5b745e0b9d174d00abca\": container with ID starting with 628ecdff8a58bbfca27b9b418007fbebdba2fbd39c9d5b745e0b9d174d00abca not found: ID does not exist" containerID="628ecdff8a58bbfca27b9b418007fbebdba2fbd39c9d5b745e0b9d174d00abca" Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.632884 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"628ecdff8a58bbfca27b9b418007fbebdba2fbd39c9d5b745e0b9d174d00abca"} err="failed to get container status \"628ecdff8a58bbfca27b9b418007fbebdba2fbd39c9d5b745e0b9d174d00abca\": rpc error: code = NotFound desc = could not find container \"628ecdff8a58bbfca27b9b418007fbebdba2fbd39c9d5b745e0b9d174d00abca\": container with ID starting with 628ecdff8a58bbfca27b9b418007fbebdba2fbd39c9d5b745e0b9d174d00abca not 
found: ID does not exist" Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.632915 4728 scope.go:117] "RemoveContainer" containerID="21096b7938064dbece42457d47572d1f5b51d1ddd5d53826338733864a14ab87" Feb 27 11:24:13 crc kubenswrapper[4728]: E0227 11:24:13.633462 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21096b7938064dbece42457d47572d1f5b51d1ddd5d53826338733864a14ab87\": container with ID starting with 21096b7938064dbece42457d47572d1f5b51d1ddd5d53826338733864a14ab87 not found: ID does not exist" containerID="21096b7938064dbece42457d47572d1f5b51d1ddd5d53826338733864a14ab87" Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.633484 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21096b7938064dbece42457d47572d1f5b51d1ddd5d53826338733864a14ab87"} err="failed to get container status \"21096b7938064dbece42457d47572d1f5b51d1ddd5d53826338733864a14ab87\": rpc error: code = NotFound desc = could not find container \"21096b7938064dbece42457d47572d1f5b51d1ddd5d53826338733864a14ab87\": container with ID starting with 21096b7938064dbece42457d47572d1f5b51d1ddd5d53826338733864a14ab87 not found: ID does not exist" Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.633497 4728 scope.go:117] "RemoveContainer" containerID="e5c523dfcc333bd89d98bf44b33e11fcc889da0505a18136595e58b3508ffe7a" Feb 27 11:24:13 crc kubenswrapper[4728]: E0227 11:24:13.634346 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5c523dfcc333bd89d98bf44b33e11fcc889da0505a18136595e58b3508ffe7a\": container with ID starting with e5c523dfcc333bd89d98bf44b33e11fcc889da0505a18136595e58b3508ffe7a not found: ID does not exist" containerID="e5c523dfcc333bd89d98bf44b33e11fcc889da0505a18136595e58b3508ffe7a" Feb 27 11:24:13 crc kubenswrapper[4728]: I0227 11:24:13.634386 4728 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5c523dfcc333bd89d98bf44b33e11fcc889da0505a18136595e58b3508ffe7a"} err="failed to get container status \"e5c523dfcc333bd89d98bf44b33e11fcc889da0505a18136595e58b3508ffe7a\": rpc error: code = NotFound desc = could not find container \"e5c523dfcc333bd89d98bf44b33e11fcc889da0505a18136595e58b3508ffe7a\": container with ID starting with e5c523dfcc333bd89d98bf44b33e11fcc889da0505a18136595e58b3508ffe7a not found: ID does not exist" Feb 27 11:24:14 crc kubenswrapper[4728]: I0227 11:24:14.737764 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb23efbd-a263-46bf-bcb8-64110410e8d1" path="/var/lib/kubelet/pods/bb23efbd-a263-46bf-bcb8-64110410e8d1/volumes" Feb 27 11:24:16 crc kubenswrapper[4728]: I0227 11:24:16.730620 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:24:16 crc kubenswrapper[4728]: E0227 11:24:16.731269 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:24:26 crc kubenswrapper[4728]: I0227 11:24:26.412610 4728 scope.go:117] "RemoveContainer" containerID="6761677d3c7b13ac279a3c90b1f5ae76b625c917cb2ed4b003e4b4bfee231ea5" Feb 27 11:24:30 crc kubenswrapper[4728]: I0227 11:24:30.740574 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:24:30 crc kubenswrapper[4728]: E0227 11:24:30.741149 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:24:45 crc kubenswrapper[4728]: I0227 11:24:45.727825 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:24:45 crc kubenswrapper[4728]: E0227 11:24:45.729249 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:25:00 crc kubenswrapper[4728]: I0227 11:25:00.733529 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:25:00 crc kubenswrapper[4728]: E0227 11:25:00.734591 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:25:11 crc kubenswrapper[4728]: I0227 11:25:11.726334 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:25:12 crc kubenswrapper[4728]: I0227 11:25:12.247383 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"c82c2d54b29d2520aadcad1224fcb062e86dca5cffd7978c2b852d83c9a59843"} Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.145333 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536526-qnr5w"] Feb 27 11:26:00 crc kubenswrapper[4728]: E0227 11:26:00.146309 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb23efbd-a263-46bf-bcb8-64110410e8d1" containerName="extract-content" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.146322 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb23efbd-a263-46bf-bcb8-64110410e8d1" containerName="extract-content" Feb 27 11:26:00 crc kubenswrapper[4728]: E0227 11:26:00.146345 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb23efbd-a263-46bf-bcb8-64110410e8d1" containerName="registry-server" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.146350 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb23efbd-a263-46bf-bcb8-64110410e8d1" containerName="registry-server" Feb 27 11:26:00 crc kubenswrapper[4728]: E0227 11:26:00.146367 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b7b33ea-08da-4e39-9c2e-11292a8b4901" containerName="oc" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.146374 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b7b33ea-08da-4e39-9c2e-11292a8b4901" containerName="oc" Feb 27 11:26:00 crc kubenswrapper[4728]: E0227 11:26:00.146384 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb23efbd-a263-46bf-bcb8-64110410e8d1" containerName="extract-utilities" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.146390 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb23efbd-a263-46bf-bcb8-64110410e8d1" containerName="extract-utilities" Feb 27 11:26:00 crc kubenswrapper[4728]: 
E0227 11:26:00.146409 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5" containerName="extract-utilities" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.146415 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5" containerName="extract-utilities" Feb 27 11:26:00 crc kubenswrapper[4728]: E0227 11:26:00.146422 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5" containerName="registry-server" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.146429 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5" containerName="registry-server" Feb 27 11:26:00 crc kubenswrapper[4728]: E0227 11:26:00.146443 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5" containerName="extract-content" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.146448 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5" containerName="extract-content" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.146684 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6c46f9c-5ab3-48c3-bb8b-1df473cb92e5" containerName="registry-server" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.146701 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b7b33ea-08da-4e39-9c2e-11292a8b4901" containerName="oc" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.146717 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb23efbd-a263-46bf-bcb8-64110410e8d1" containerName="registry-server" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.148738 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536526-qnr5w" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.151303 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.151368 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.151669 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.186996 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536526-qnr5w"] Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.245545 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-692bz\" (UniqueName: \"kubernetes.io/projected/e720dd4b-f96d-490b-92c3-1aa62f21c98f-kube-api-access-692bz\") pod \"auto-csr-approver-29536526-qnr5w\" (UID: \"e720dd4b-f96d-490b-92c3-1aa62f21c98f\") " pod="openshift-infra/auto-csr-approver-29536526-qnr5w" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.348379 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-692bz\" (UniqueName: \"kubernetes.io/projected/e720dd4b-f96d-490b-92c3-1aa62f21c98f-kube-api-access-692bz\") pod \"auto-csr-approver-29536526-qnr5w\" (UID: \"e720dd4b-f96d-490b-92c3-1aa62f21c98f\") " pod="openshift-infra/auto-csr-approver-29536526-qnr5w" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.367937 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-692bz\" (UniqueName: \"kubernetes.io/projected/e720dd4b-f96d-490b-92c3-1aa62f21c98f-kube-api-access-692bz\") pod \"auto-csr-approver-29536526-qnr5w\" (UID: \"e720dd4b-f96d-490b-92c3-1aa62f21c98f\") " 
pod="openshift-infra/auto-csr-approver-29536526-qnr5w" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.473412 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536526-qnr5w" Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.992470 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536526-qnr5w"] Feb 27 11:26:00 crc kubenswrapper[4728]: I0227 11:26:00.992660 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 11:26:01 crc kubenswrapper[4728]: I0227 11:26:01.821468 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536526-qnr5w" event={"ID":"e720dd4b-f96d-490b-92c3-1aa62f21c98f","Type":"ContainerStarted","Data":"3b4f044bbee1fb5ff5e13a460c18a14522dbac04b3b6cce820378e3f188f828f"} Feb 27 11:26:02 crc kubenswrapper[4728]: I0227 11:26:02.836244 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536526-qnr5w" event={"ID":"e720dd4b-f96d-490b-92c3-1aa62f21c98f","Type":"ContainerStarted","Data":"a1a505cfa4eebdc4fbea80bad8f5db119d250514e1201c2107f41fd20b244c95"} Feb 27 11:26:02 crc kubenswrapper[4728]: I0227 11:26:02.861879 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536526-qnr5w" podStartSLOduration=1.684595074 podStartE2EDuration="2.861859176s" podCreationTimestamp="2026-02-27 11:26:00 +0000 UTC" firstStartedPulling="2026-02-27 11:26:00.992012796 +0000 UTC m=+3580.954378922" lastFinishedPulling="2026-02-27 11:26:02.169276918 +0000 UTC m=+3582.131643024" observedRunningTime="2026-02-27 11:26:02.853303783 +0000 UTC m=+3582.815669889" watchObservedRunningTime="2026-02-27 11:26:02.861859176 +0000 UTC m=+3582.824225282" Feb 27 11:26:03 crc kubenswrapper[4728]: I0227 11:26:03.856449 4728 generic.go:334] "Generic (PLEG): container finished" 
podID="e720dd4b-f96d-490b-92c3-1aa62f21c98f" containerID="a1a505cfa4eebdc4fbea80bad8f5db119d250514e1201c2107f41fd20b244c95" exitCode=0 Feb 27 11:26:03 crc kubenswrapper[4728]: I0227 11:26:03.856834 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536526-qnr5w" event={"ID":"e720dd4b-f96d-490b-92c3-1aa62f21c98f","Type":"ContainerDied","Data":"a1a505cfa4eebdc4fbea80bad8f5db119d250514e1201c2107f41fd20b244c95"} Feb 27 11:26:05 crc kubenswrapper[4728]: I0227 11:26:05.370922 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536526-qnr5w" Feb 27 11:26:05 crc kubenswrapper[4728]: I0227 11:26:05.425007 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-692bz\" (UniqueName: \"kubernetes.io/projected/e720dd4b-f96d-490b-92c3-1aa62f21c98f-kube-api-access-692bz\") pod \"e720dd4b-f96d-490b-92c3-1aa62f21c98f\" (UID: \"e720dd4b-f96d-490b-92c3-1aa62f21c98f\") " Feb 27 11:26:05 crc kubenswrapper[4728]: I0227 11:26:05.430828 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e720dd4b-f96d-490b-92c3-1aa62f21c98f-kube-api-access-692bz" (OuterVolumeSpecName: "kube-api-access-692bz") pod "e720dd4b-f96d-490b-92c3-1aa62f21c98f" (UID: "e720dd4b-f96d-490b-92c3-1aa62f21c98f"). InnerVolumeSpecName "kube-api-access-692bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:26:05 crc kubenswrapper[4728]: I0227 11:26:05.528540 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-692bz\" (UniqueName: \"kubernetes.io/projected/e720dd4b-f96d-490b-92c3-1aa62f21c98f-kube-api-access-692bz\") on node \"crc\" DevicePath \"\"" Feb 27 11:26:05 crc kubenswrapper[4728]: I0227 11:26:05.884614 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536526-qnr5w" event={"ID":"e720dd4b-f96d-490b-92c3-1aa62f21c98f","Type":"ContainerDied","Data":"3b4f044bbee1fb5ff5e13a460c18a14522dbac04b3b6cce820378e3f188f828f"} Feb 27 11:26:05 crc kubenswrapper[4728]: I0227 11:26:05.884971 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b4f044bbee1fb5ff5e13a460c18a14522dbac04b3b6cce820378e3f188f828f" Feb 27 11:26:05 crc kubenswrapper[4728]: I0227 11:26:05.884661 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536526-qnr5w" Feb 27 11:26:05 crc kubenswrapper[4728]: I0227 11:26:05.941975 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536520-986fg"] Feb 27 11:26:05 crc kubenswrapper[4728]: I0227 11:26:05.957589 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536520-986fg"] Feb 27 11:26:06 crc kubenswrapper[4728]: I0227 11:26:06.742464 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2601deca-07c9-4871-bef6-57a312202700" path="/var/lib/kubelet/pods/2601deca-07c9-4871-bef6-57a312202700/volumes" Feb 27 11:26:26 crc kubenswrapper[4728]: I0227 11:26:26.568930 4728 scope.go:117] "RemoveContainer" containerID="baa3431c9e42e09122cec31771f2b9f2848397d77155ec37bdd4a6387e0844ae" Feb 27 11:26:44 crc kubenswrapper[4728]: I0227 11:26:44.028319 4728 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-wvnhk"] Feb 27 11:26:44 crc kubenswrapper[4728]: E0227 11:26:44.036475 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e720dd4b-f96d-490b-92c3-1aa62f21c98f" containerName="oc" Feb 27 11:26:44 crc kubenswrapper[4728]: I0227 11:26:44.036500 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e720dd4b-f96d-490b-92c3-1aa62f21c98f" containerName="oc" Feb 27 11:26:44 crc kubenswrapper[4728]: I0227 11:26:44.036834 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e720dd4b-f96d-490b-92c3-1aa62f21c98f" containerName="oc" Feb 27 11:26:44 crc kubenswrapper[4728]: I0227 11:26:44.039361 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvnhk" Feb 27 11:26:44 crc kubenswrapper[4728]: I0227 11:26:44.040039 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvnhk"] Feb 27 11:26:44 crc kubenswrapper[4728]: I0227 11:26:44.168819 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82e83a6-408b-428a-945a-d595f7ffe57e-catalog-content\") pod \"redhat-marketplace-wvnhk\" (UID: \"c82e83a6-408b-428a-945a-d595f7ffe57e\") " pod="openshift-marketplace/redhat-marketplace-wvnhk" Feb 27 11:26:44 crc kubenswrapper[4728]: I0227 11:26:44.168880 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn4fb\" (UniqueName: \"kubernetes.io/projected/c82e83a6-408b-428a-945a-d595f7ffe57e-kube-api-access-sn4fb\") pod \"redhat-marketplace-wvnhk\" (UID: \"c82e83a6-408b-428a-945a-d595f7ffe57e\") " pod="openshift-marketplace/redhat-marketplace-wvnhk" Feb 27 11:26:44 crc kubenswrapper[4728]: I0227 11:26:44.168927 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/c82e83a6-408b-428a-945a-d595f7ffe57e-utilities\") pod \"redhat-marketplace-wvnhk\" (UID: \"c82e83a6-408b-428a-945a-d595f7ffe57e\") " pod="openshift-marketplace/redhat-marketplace-wvnhk" Feb 27 11:26:44 crc kubenswrapper[4728]: I0227 11:26:44.271249 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82e83a6-408b-428a-945a-d595f7ffe57e-catalog-content\") pod \"redhat-marketplace-wvnhk\" (UID: \"c82e83a6-408b-428a-945a-d595f7ffe57e\") " pod="openshift-marketplace/redhat-marketplace-wvnhk" Feb 27 11:26:44 crc kubenswrapper[4728]: I0227 11:26:44.271325 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn4fb\" (UniqueName: \"kubernetes.io/projected/c82e83a6-408b-428a-945a-d595f7ffe57e-kube-api-access-sn4fb\") pod \"redhat-marketplace-wvnhk\" (UID: \"c82e83a6-408b-428a-945a-d595f7ffe57e\") " pod="openshift-marketplace/redhat-marketplace-wvnhk" Feb 27 11:26:44 crc kubenswrapper[4728]: I0227 11:26:44.271386 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82e83a6-408b-428a-945a-d595f7ffe57e-utilities\") pod \"redhat-marketplace-wvnhk\" (UID: \"c82e83a6-408b-428a-945a-d595f7ffe57e\") " pod="openshift-marketplace/redhat-marketplace-wvnhk" Feb 27 11:26:44 crc kubenswrapper[4728]: I0227 11:26:44.271974 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82e83a6-408b-428a-945a-d595f7ffe57e-utilities\") pod \"redhat-marketplace-wvnhk\" (UID: \"c82e83a6-408b-428a-945a-d595f7ffe57e\") " pod="openshift-marketplace/redhat-marketplace-wvnhk" Feb 27 11:26:44 crc kubenswrapper[4728]: I0227 11:26:44.272214 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c82e83a6-408b-428a-945a-d595f7ffe57e-catalog-content\") pod \"redhat-marketplace-wvnhk\" (UID: \"c82e83a6-408b-428a-945a-d595f7ffe57e\") " pod="openshift-marketplace/redhat-marketplace-wvnhk" Feb 27 11:26:44 crc kubenswrapper[4728]: I0227 11:26:44.292705 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn4fb\" (UniqueName: \"kubernetes.io/projected/c82e83a6-408b-428a-945a-d595f7ffe57e-kube-api-access-sn4fb\") pod \"redhat-marketplace-wvnhk\" (UID: \"c82e83a6-408b-428a-945a-d595f7ffe57e\") " pod="openshift-marketplace/redhat-marketplace-wvnhk" Feb 27 11:26:44 crc kubenswrapper[4728]: I0227 11:26:44.393642 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvnhk" Feb 27 11:26:44 crc kubenswrapper[4728]: I0227 11:26:44.917037 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvnhk"] Feb 27 11:26:45 crc kubenswrapper[4728]: I0227 11:26:45.384159 4728 generic.go:334] "Generic (PLEG): container finished" podID="c82e83a6-408b-428a-945a-d595f7ffe57e" containerID="105deb6ccc5fcb32a893709f02f1f6e4c197b52171c0b1e7faf9ee257cd4f54f" exitCode=0 Feb 27 11:26:45 crc kubenswrapper[4728]: I0227 11:26:45.384494 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvnhk" event={"ID":"c82e83a6-408b-428a-945a-d595f7ffe57e","Type":"ContainerDied","Data":"105deb6ccc5fcb32a893709f02f1f6e4c197b52171c0b1e7faf9ee257cd4f54f"} Feb 27 11:26:45 crc kubenswrapper[4728]: I0227 11:26:45.384560 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvnhk" event={"ID":"c82e83a6-408b-428a-945a-d595f7ffe57e","Type":"ContainerStarted","Data":"aa6939b8193a6663eca140e841f7cfd8129a333500fe115e7b8f1b6defa23687"} Feb 27 11:26:48 crc kubenswrapper[4728]: I0227 11:26:48.421059 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-wvnhk" event={"ID":"c82e83a6-408b-428a-945a-d595f7ffe57e","Type":"ContainerStarted","Data":"0c4171ff8f974f5846817f27ef589ba5343e397aca7235d21cfdf83ae16c4f31"} Feb 27 11:26:52 crc kubenswrapper[4728]: I0227 11:26:52.502047 4728 generic.go:334] "Generic (PLEG): container finished" podID="c82e83a6-408b-428a-945a-d595f7ffe57e" containerID="0c4171ff8f974f5846817f27ef589ba5343e397aca7235d21cfdf83ae16c4f31" exitCode=0 Feb 27 11:26:52 crc kubenswrapper[4728]: I0227 11:26:52.502136 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvnhk" event={"ID":"c82e83a6-408b-428a-945a-d595f7ffe57e","Type":"ContainerDied","Data":"0c4171ff8f974f5846817f27ef589ba5343e397aca7235d21cfdf83ae16c4f31"} Feb 27 11:26:53 crc kubenswrapper[4728]: I0227 11:26:53.519968 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvnhk" event={"ID":"c82e83a6-408b-428a-945a-d595f7ffe57e","Type":"ContainerStarted","Data":"271a9e1bea865ae40a3deb2a59dcdb93cf877442a2f3d490e636de527f7e95c2"} Feb 27 11:26:54 crc kubenswrapper[4728]: I0227 11:26:54.394192 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wvnhk" Feb 27 11:26:54 crc kubenswrapper[4728]: I0227 11:26:54.394683 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wvnhk" Feb 27 11:26:55 crc kubenswrapper[4728]: I0227 11:26:55.476833 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-wvnhk" podUID="c82e83a6-408b-428a-945a-d595f7ffe57e" containerName="registry-server" probeResult="failure" output=< Feb 27 11:26:55 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:26:55 crc kubenswrapper[4728]: > Feb 27 11:27:04 crc kubenswrapper[4728]: I0227 11:27:04.453935 4728 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wvnhk" Feb 27 11:27:04 crc kubenswrapper[4728]: I0227 11:27:04.482999 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wvnhk" podStartSLOduration=13.877717312 podStartE2EDuration="21.482977963s" podCreationTimestamp="2026-02-27 11:26:43 +0000 UTC" firstStartedPulling="2026-02-27 11:26:45.386578346 +0000 UTC m=+3625.348944502" lastFinishedPulling="2026-02-27 11:26:52.991839017 +0000 UTC m=+3632.954205153" observedRunningTime="2026-02-27 11:26:53.559827881 +0000 UTC m=+3633.522193987" watchObservedRunningTime="2026-02-27 11:27:04.482977963 +0000 UTC m=+3644.445344069" Feb 27 11:27:04 crc kubenswrapper[4728]: I0227 11:27:04.507591 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wvnhk" Feb 27 11:27:04 crc kubenswrapper[4728]: I0227 11:27:04.702948 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvnhk"] Feb 27 11:27:05 crc kubenswrapper[4728]: I0227 11:27:05.667244 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wvnhk" podUID="c82e83a6-408b-428a-945a-d595f7ffe57e" containerName="registry-server" containerID="cri-o://271a9e1bea865ae40a3deb2a59dcdb93cf877442a2f3d490e636de527f7e95c2" gracePeriod=2 Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.277450 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvnhk" Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.439479 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn4fb\" (UniqueName: \"kubernetes.io/projected/c82e83a6-408b-428a-945a-d595f7ffe57e-kube-api-access-sn4fb\") pod \"c82e83a6-408b-428a-945a-d595f7ffe57e\" (UID: \"c82e83a6-408b-428a-945a-d595f7ffe57e\") " Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.439539 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82e83a6-408b-428a-945a-d595f7ffe57e-utilities\") pod \"c82e83a6-408b-428a-945a-d595f7ffe57e\" (UID: \"c82e83a6-408b-428a-945a-d595f7ffe57e\") " Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.439665 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82e83a6-408b-428a-945a-d595f7ffe57e-catalog-content\") pod \"c82e83a6-408b-428a-945a-d595f7ffe57e\" (UID: \"c82e83a6-408b-428a-945a-d595f7ffe57e\") " Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.446300 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c82e83a6-408b-428a-945a-d595f7ffe57e-utilities" (OuterVolumeSpecName: "utilities") pod "c82e83a6-408b-428a-945a-d595f7ffe57e" (UID: "c82e83a6-408b-428a-945a-d595f7ffe57e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.451422 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c82e83a6-408b-428a-945a-d595f7ffe57e-kube-api-access-sn4fb" (OuterVolumeSpecName: "kube-api-access-sn4fb") pod "c82e83a6-408b-428a-945a-d595f7ffe57e" (UID: "c82e83a6-408b-428a-945a-d595f7ffe57e"). InnerVolumeSpecName "kube-api-access-sn4fb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.462779 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c82e83a6-408b-428a-945a-d595f7ffe57e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c82e83a6-408b-428a-945a-d595f7ffe57e" (UID: "c82e83a6-408b-428a-945a-d595f7ffe57e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.542790 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn4fb\" (UniqueName: \"kubernetes.io/projected/c82e83a6-408b-428a-945a-d595f7ffe57e-kube-api-access-sn4fb\") on node \"crc\" DevicePath \"\"" Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.542835 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82e83a6-408b-428a-945a-d595f7ffe57e-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.542846 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82e83a6-408b-428a-945a-d595f7ffe57e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.683300 4728 generic.go:334] "Generic (PLEG): container finished" podID="c82e83a6-408b-428a-945a-d595f7ffe57e" containerID="271a9e1bea865ae40a3deb2a59dcdb93cf877442a2f3d490e636de527f7e95c2" exitCode=0 Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.683339 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvnhk" event={"ID":"c82e83a6-408b-428a-945a-d595f7ffe57e","Type":"ContainerDied","Data":"271a9e1bea865ae40a3deb2a59dcdb93cf877442a2f3d490e636de527f7e95c2"} Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.683362 4728 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-wvnhk" event={"ID":"c82e83a6-408b-428a-945a-d595f7ffe57e","Type":"ContainerDied","Data":"aa6939b8193a6663eca140e841f7cfd8129a333500fe115e7b8f1b6defa23687"} Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.683382 4728 scope.go:117] "RemoveContainer" containerID="271a9e1bea865ae40a3deb2a59dcdb93cf877442a2f3d490e636de527f7e95c2" Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.684350 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvnhk" Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.713856 4728 scope.go:117] "RemoveContainer" containerID="0c4171ff8f974f5846817f27ef589ba5343e397aca7235d21cfdf83ae16c4f31" Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.718369 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvnhk"] Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.744075 4728 scope.go:117] "RemoveContainer" containerID="105deb6ccc5fcb32a893709f02f1f6e4c197b52171c0b1e7faf9ee257cd4f54f" Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.763009 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvnhk"] Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.812149 4728 scope.go:117] "RemoveContainer" containerID="271a9e1bea865ae40a3deb2a59dcdb93cf877442a2f3d490e636de527f7e95c2" Feb 27 11:27:06 crc kubenswrapper[4728]: E0227 11:27:06.812983 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"271a9e1bea865ae40a3deb2a59dcdb93cf877442a2f3d490e636de527f7e95c2\": container with ID starting with 271a9e1bea865ae40a3deb2a59dcdb93cf877442a2f3d490e636de527f7e95c2 not found: ID does not exist" containerID="271a9e1bea865ae40a3deb2a59dcdb93cf877442a2f3d490e636de527f7e95c2" Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.813045 4728 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"271a9e1bea865ae40a3deb2a59dcdb93cf877442a2f3d490e636de527f7e95c2"} err="failed to get container status \"271a9e1bea865ae40a3deb2a59dcdb93cf877442a2f3d490e636de527f7e95c2\": rpc error: code = NotFound desc = could not find container \"271a9e1bea865ae40a3deb2a59dcdb93cf877442a2f3d490e636de527f7e95c2\": container with ID starting with 271a9e1bea865ae40a3deb2a59dcdb93cf877442a2f3d490e636de527f7e95c2 not found: ID does not exist" Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.813068 4728 scope.go:117] "RemoveContainer" containerID="0c4171ff8f974f5846817f27ef589ba5343e397aca7235d21cfdf83ae16c4f31" Feb 27 11:27:06 crc kubenswrapper[4728]: E0227 11:27:06.813456 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c4171ff8f974f5846817f27ef589ba5343e397aca7235d21cfdf83ae16c4f31\": container with ID starting with 0c4171ff8f974f5846817f27ef589ba5343e397aca7235d21cfdf83ae16c4f31 not found: ID does not exist" containerID="0c4171ff8f974f5846817f27ef589ba5343e397aca7235d21cfdf83ae16c4f31" Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.813486 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c4171ff8f974f5846817f27ef589ba5343e397aca7235d21cfdf83ae16c4f31"} err="failed to get container status \"0c4171ff8f974f5846817f27ef589ba5343e397aca7235d21cfdf83ae16c4f31\": rpc error: code = NotFound desc = could not find container \"0c4171ff8f974f5846817f27ef589ba5343e397aca7235d21cfdf83ae16c4f31\": container with ID starting with 0c4171ff8f974f5846817f27ef589ba5343e397aca7235d21cfdf83ae16c4f31 not found: ID does not exist" Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.813554 4728 scope.go:117] "RemoveContainer" containerID="105deb6ccc5fcb32a893709f02f1f6e4c197b52171c0b1e7faf9ee257cd4f54f" Feb 27 11:27:06 crc kubenswrapper[4728]: E0227 
11:27:06.813866 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"105deb6ccc5fcb32a893709f02f1f6e4c197b52171c0b1e7faf9ee257cd4f54f\": container with ID starting with 105deb6ccc5fcb32a893709f02f1f6e4c197b52171c0b1e7faf9ee257cd4f54f not found: ID does not exist" containerID="105deb6ccc5fcb32a893709f02f1f6e4c197b52171c0b1e7faf9ee257cd4f54f" Feb 27 11:27:06 crc kubenswrapper[4728]: I0227 11:27:06.813918 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"105deb6ccc5fcb32a893709f02f1f6e4c197b52171c0b1e7faf9ee257cd4f54f"} err="failed to get container status \"105deb6ccc5fcb32a893709f02f1f6e4c197b52171c0b1e7faf9ee257cd4f54f\": rpc error: code = NotFound desc = could not find container \"105deb6ccc5fcb32a893709f02f1f6e4c197b52171c0b1e7faf9ee257cd4f54f\": container with ID starting with 105deb6ccc5fcb32a893709f02f1f6e4c197b52171c0b1e7faf9ee257cd4f54f not found: ID does not exist" Feb 27 11:27:08 crc kubenswrapper[4728]: I0227 11:27:08.738406 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c82e83a6-408b-428a-945a-d595f7ffe57e" path="/var/lib/kubelet/pods/c82e83a6-408b-428a-945a-d595f7ffe57e/volumes" Feb 27 11:27:35 crc kubenswrapper[4728]: I0227 11:27:35.922719 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:27:35 crc kubenswrapper[4728]: I0227 11:27:35.923353 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 27 11:28:00 crc kubenswrapper[4728]: I0227 11:28:00.180376 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536528-f2kcm"] Feb 27 11:28:00 crc kubenswrapper[4728]: E0227 11:28:00.181690 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82e83a6-408b-428a-945a-d595f7ffe57e" containerName="registry-server" Feb 27 11:28:00 crc kubenswrapper[4728]: I0227 11:28:00.181709 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82e83a6-408b-428a-945a-d595f7ffe57e" containerName="registry-server" Feb 27 11:28:00 crc kubenswrapper[4728]: E0227 11:28:00.181760 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82e83a6-408b-428a-945a-d595f7ffe57e" containerName="extract-utilities" Feb 27 11:28:00 crc kubenswrapper[4728]: I0227 11:28:00.181770 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82e83a6-408b-428a-945a-d595f7ffe57e" containerName="extract-utilities" Feb 27 11:28:00 crc kubenswrapper[4728]: E0227 11:28:00.181812 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82e83a6-408b-428a-945a-d595f7ffe57e" containerName="extract-content" Feb 27 11:28:00 crc kubenswrapper[4728]: I0227 11:28:00.181820 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82e83a6-408b-428a-945a-d595f7ffe57e" containerName="extract-content" Feb 27 11:28:00 crc kubenswrapper[4728]: I0227 11:28:00.182153 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c82e83a6-408b-428a-945a-d595f7ffe57e" containerName="registry-server" Feb 27 11:28:00 crc kubenswrapper[4728]: I0227 11:28:00.183376 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536528-f2kcm" Feb 27 11:28:00 crc kubenswrapper[4728]: I0227 11:28:00.185328 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:28:00 crc kubenswrapper[4728]: I0227 11:28:00.186316 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:28:00 crc kubenswrapper[4728]: I0227 11:28:00.187701 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:28:00 crc kubenswrapper[4728]: I0227 11:28:00.208190 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536528-f2kcm"] Feb 27 11:28:00 crc kubenswrapper[4728]: I0227 11:28:00.352800 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d29jf\" (UniqueName: \"kubernetes.io/projected/b37a29cd-efd0-447e-8f9c-567ea9a93dee-kube-api-access-d29jf\") pod \"auto-csr-approver-29536528-f2kcm\" (UID: \"b37a29cd-efd0-447e-8f9c-567ea9a93dee\") " pod="openshift-infra/auto-csr-approver-29536528-f2kcm" Feb 27 11:28:00 crc kubenswrapper[4728]: I0227 11:28:00.454634 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d29jf\" (UniqueName: \"kubernetes.io/projected/b37a29cd-efd0-447e-8f9c-567ea9a93dee-kube-api-access-d29jf\") pod \"auto-csr-approver-29536528-f2kcm\" (UID: \"b37a29cd-efd0-447e-8f9c-567ea9a93dee\") " pod="openshift-infra/auto-csr-approver-29536528-f2kcm" Feb 27 11:28:00 crc kubenswrapper[4728]: I0227 11:28:00.478185 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d29jf\" (UniqueName: \"kubernetes.io/projected/b37a29cd-efd0-447e-8f9c-567ea9a93dee-kube-api-access-d29jf\") pod \"auto-csr-approver-29536528-f2kcm\" (UID: \"b37a29cd-efd0-447e-8f9c-567ea9a93dee\") " 
pod="openshift-infra/auto-csr-approver-29536528-f2kcm" Feb 27 11:28:00 crc kubenswrapper[4728]: I0227 11:28:00.511186 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536528-f2kcm" Feb 27 11:28:00 crc kubenswrapper[4728]: I0227 11:28:00.986832 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536528-f2kcm"] Feb 27 11:28:01 crc kubenswrapper[4728]: I0227 11:28:01.326108 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536528-f2kcm" event={"ID":"b37a29cd-efd0-447e-8f9c-567ea9a93dee","Type":"ContainerStarted","Data":"2dcae1aba8d900883cab3b33e3aa0c458b464c58b199796d6bf0ad5651f3e5ff"} Feb 27 11:28:03 crc kubenswrapper[4728]: I0227 11:28:03.351234 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536528-f2kcm" event={"ID":"b37a29cd-efd0-447e-8f9c-567ea9a93dee","Type":"ContainerStarted","Data":"4985d1e189a1d195fb24e205379bb9c046b84be64f358b115659602ce1cd2c4e"} Feb 27 11:28:03 crc kubenswrapper[4728]: I0227 11:28:03.384372 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536528-f2kcm" podStartSLOduration=1.842978235 podStartE2EDuration="3.384342117s" podCreationTimestamp="2026-02-27 11:28:00 +0000 UTC" firstStartedPulling="2026-02-27 11:28:00.98131286 +0000 UTC m=+3700.943678956" lastFinishedPulling="2026-02-27 11:28:02.522676732 +0000 UTC m=+3702.485042838" observedRunningTime="2026-02-27 11:28:03.379173566 +0000 UTC m=+3703.341539672" watchObservedRunningTime="2026-02-27 11:28:03.384342117 +0000 UTC m=+3703.346708223" Feb 27 11:28:04 crc kubenswrapper[4728]: I0227 11:28:04.371466 4728 generic.go:334] "Generic (PLEG): container finished" podID="b37a29cd-efd0-447e-8f9c-567ea9a93dee" containerID="4985d1e189a1d195fb24e205379bb9c046b84be64f358b115659602ce1cd2c4e" exitCode=0 Feb 27 11:28:04 crc 
kubenswrapper[4728]: I0227 11:28:04.371694 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536528-f2kcm" event={"ID":"b37a29cd-efd0-447e-8f9c-567ea9a93dee","Type":"ContainerDied","Data":"4985d1e189a1d195fb24e205379bb9c046b84be64f358b115659602ce1cd2c4e"} Feb 27 11:28:05 crc kubenswrapper[4728]: I0227 11:28:05.802984 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536528-f2kcm" Feb 27 11:28:05 crc kubenswrapper[4728]: I0227 11:28:05.922591 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:28:05 crc kubenswrapper[4728]: I0227 11:28:05.922962 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:28:05 crc kubenswrapper[4728]: I0227 11:28:05.935657 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d29jf\" (UniqueName: \"kubernetes.io/projected/b37a29cd-efd0-447e-8f9c-567ea9a93dee-kube-api-access-d29jf\") pod \"b37a29cd-efd0-447e-8f9c-567ea9a93dee\" (UID: \"b37a29cd-efd0-447e-8f9c-567ea9a93dee\") " Feb 27 11:28:05 crc kubenswrapper[4728]: I0227 11:28:05.949751 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37a29cd-efd0-447e-8f9c-567ea9a93dee-kube-api-access-d29jf" (OuterVolumeSpecName: "kube-api-access-d29jf") pod "b37a29cd-efd0-447e-8f9c-567ea9a93dee" (UID: "b37a29cd-efd0-447e-8f9c-567ea9a93dee"). 
InnerVolumeSpecName "kube-api-access-d29jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:28:06 crc kubenswrapper[4728]: I0227 11:28:06.038958 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d29jf\" (UniqueName: \"kubernetes.io/projected/b37a29cd-efd0-447e-8f9c-567ea9a93dee-kube-api-access-d29jf\") on node \"crc\" DevicePath \"\"" Feb 27 11:28:06 crc kubenswrapper[4728]: I0227 11:28:06.395453 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536528-f2kcm" event={"ID":"b37a29cd-efd0-447e-8f9c-567ea9a93dee","Type":"ContainerDied","Data":"2dcae1aba8d900883cab3b33e3aa0c458b464c58b199796d6bf0ad5651f3e5ff"} Feb 27 11:28:06 crc kubenswrapper[4728]: I0227 11:28:06.395491 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dcae1aba8d900883cab3b33e3aa0c458b464c58b199796d6bf0ad5651f3e5ff" Feb 27 11:28:06 crc kubenswrapper[4728]: I0227 11:28:06.395514 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536528-f2kcm" Feb 27 11:28:06 crc kubenswrapper[4728]: I0227 11:28:06.454681 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536522-qhglf"] Feb 27 11:28:06 crc kubenswrapper[4728]: I0227 11:28:06.465310 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536522-qhglf"] Feb 27 11:28:06 crc kubenswrapper[4728]: I0227 11:28:06.742945 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94404864-351e-4827-b8f1-a59bf9a35f03" path="/var/lib/kubelet/pods/94404864-351e-4827-b8f1-a59bf9a35f03/volumes" Feb 27 11:28:26 crc kubenswrapper[4728]: I0227 11:28:26.725234 4728 scope.go:117] "RemoveContainer" containerID="f11f2b3c85c0c86e06c9baa9744e78fb733c7b7af32ceaf877eec8d3a0df98e7" Feb 27 11:28:35 crc kubenswrapper[4728]: I0227 11:28:35.922933 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:28:35 crc kubenswrapper[4728]: I0227 11:28:35.923614 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:28:35 crc kubenswrapper[4728]: I0227 11:28:35.923687 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 11:28:35 crc kubenswrapper[4728]: I0227 11:28:35.925026 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"c82c2d54b29d2520aadcad1224fcb062e86dca5cffd7978c2b852d83c9a59843"} pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 11:28:35 crc kubenswrapper[4728]: I0227 11:28:35.925102 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" containerID="cri-o://c82c2d54b29d2520aadcad1224fcb062e86dca5cffd7978c2b852d83c9a59843" gracePeriod=600 Feb 27 11:28:36 crc kubenswrapper[4728]: I0227 11:28:36.810723 4728 generic.go:334] "Generic (PLEG): container finished" podID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerID="c82c2d54b29d2520aadcad1224fcb062e86dca5cffd7978c2b852d83c9a59843" exitCode=0 Feb 27 11:28:36 crc kubenswrapper[4728]: I0227 11:28:36.810813 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerDied","Data":"c82c2d54b29d2520aadcad1224fcb062e86dca5cffd7978c2b852d83c9a59843"} Feb 27 11:28:36 crc kubenswrapper[4728]: I0227 11:28:36.811362 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed"} Feb 27 11:28:36 crc kubenswrapper[4728]: I0227 11:28:36.811390 4728 scope.go:117] "RemoveContainer" containerID="4b26cdc47a7e7e6f6d03cb73756691c7c42ce277f12dd365521433a950382ed6" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.158918 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536530-fbx9s"] Feb 27 11:30:00 crc kubenswrapper[4728]: E0227 
11:30:00.160221 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37a29cd-efd0-447e-8f9c-567ea9a93dee" containerName="oc" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.160243 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37a29cd-efd0-447e-8f9c-567ea9a93dee" containerName="oc" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.160734 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37a29cd-efd0-447e-8f9c-567ea9a93dee" containerName="oc" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.161788 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536530-fbx9s" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.164757 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.164969 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.165129 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.200704 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6"] Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.202314 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.204290 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.204561 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.213998 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536530-fbx9s"] Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.233182 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6"] Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.303653 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dbp9\" (UniqueName: \"kubernetes.io/projected/e377a6ae-105e-4b6b-86c7-59224b2409d4-kube-api-access-5dbp9\") pod \"auto-csr-approver-29536530-fbx9s\" (UID: \"e377a6ae-105e-4b6b-86c7-59224b2409d4\") " pod="openshift-infra/auto-csr-approver-29536530-fbx9s" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.303710 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44b8e204-6403-4250-8ca4-c046062ad67a-config-volume\") pod \"collect-profiles-29536530-xxgb6\" (UID: \"44b8e204-6403-4250-8ca4-c046062ad67a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.303738 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbx2n\" (UniqueName: 
\"kubernetes.io/projected/44b8e204-6403-4250-8ca4-c046062ad67a-kube-api-access-xbx2n\") pod \"collect-profiles-29536530-xxgb6\" (UID: \"44b8e204-6403-4250-8ca4-c046062ad67a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.303788 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44b8e204-6403-4250-8ca4-c046062ad67a-secret-volume\") pod \"collect-profiles-29536530-xxgb6\" (UID: \"44b8e204-6403-4250-8ca4-c046062ad67a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.405419 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dbp9\" (UniqueName: \"kubernetes.io/projected/e377a6ae-105e-4b6b-86c7-59224b2409d4-kube-api-access-5dbp9\") pod \"auto-csr-approver-29536530-fbx9s\" (UID: \"e377a6ae-105e-4b6b-86c7-59224b2409d4\") " pod="openshift-infra/auto-csr-approver-29536530-fbx9s" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.405485 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44b8e204-6403-4250-8ca4-c046062ad67a-config-volume\") pod \"collect-profiles-29536530-xxgb6\" (UID: \"44b8e204-6403-4250-8ca4-c046062ad67a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.405536 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbx2n\" (UniqueName: \"kubernetes.io/projected/44b8e204-6403-4250-8ca4-c046062ad67a-kube-api-access-xbx2n\") pod \"collect-profiles-29536530-xxgb6\" (UID: \"44b8e204-6403-4250-8ca4-c046062ad67a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6" Feb 27 11:30:00 crc 
kubenswrapper[4728]: I0227 11:30:00.405593 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44b8e204-6403-4250-8ca4-c046062ad67a-secret-volume\") pod \"collect-profiles-29536530-xxgb6\" (UID: \"44b8e204-6403-4250-8ca4-c046062ad67a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.407174 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44b8e204-6403-4250-8ca4-c046062ad67a-config-volume\") pod \"collect-profiles-29536530-xxgb6\" (UID: \"44b8e204-6403-4250-8ca4-c046062ad67a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.415318 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44b8e204-6403-4250-8ca4-c046062ad67a-secret-volume\") pod \"collect-profiles-29536530-xxgb6\" (UID: \"44b8e204-6403-4250-8ca4-c046062ad67a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.430626 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dbp9\" (UniqueName: \"kubernetes.io/projected/e377a6ae-105e-4b6b-86c7-59224b2409d4-kube-api-access-5dbp9\") pod \"auto-csr-approver-29536530-fbx9s\" (UID: \"e377a6ae-105e-4b6b-86c7-59224b2409d4\") " pod="openshift-infra/auto-csr-approver-29536530-fbx9s" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.434173 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbx2n\" (UniqueName: \"kubernetes.io/projected/44b8e204-6403-4250-8ca4-c046062ad67a-kube-api-access-xbx2n\") pod \"collect-profiles-29536530-xxgb6\" (UID: \"44b8e204-6403-4250-8ca4-c046062ad67a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.494029 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536530-fbx9s" Feb 27 11:30:00 crc kubenswrapper[4728]: I0227 11:30:00.522330 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6" Feb 27 11:30:01 crc kubenswrapper[4728]: I0227 11:30:01.019161 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536530-fbx9s"] Feb 27 11:30:01 crc kubenswrapper[4728]: W0227 11:30:01.092630 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44b8e204_6403_4250_8ca4_c046062ad67a.slice/crio-042b1a1555eb986c70cc5f98559bf5fe4f612822d91a6b6f99e65b187dd771fc WatchSource:0}: Error finding container 042b1a1555eb986c70cc5f98559bf5fe4f612822d91a6b6f99e65b187dd771fc: Status 404 returned error can't find the container with id 042b1a1555eb986c70cc5f98559bf5fe4f612822d91a6b6f99e65b187dd771fc Feb 27 11:30:01 crc kubenswrapper[4728]: I0227 11:30:01.101340 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6"] Feb 27 11:30:01 crc kubenswrapper[4728]: I0227 11:30:01.813244 4728 generic.go:334] "Generic (PLEG): container finished" podID="44b8e204-6403-4250-8ca4-c046062ad67a" containerID="9a727b3c7bb1ff1701c86f10138b4e58167a47d818a4e5a2c2ab098bc5eebe14" exitCode=0 Feb 27 11:30:01 crc kubenswrapper[4728]: I0227 11:30:01.813306 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6" event={"ID":"44b8e204-6403-4250-8ca4-c046062ad67a","Type":"ContainerDied","Data":"9a727b3c7bb1ff1701c86f10138b4e58167a47d818a4e5a2c2ab098bc5eebe14"} Feb 27 
11:30:01 crc kubenswrapper[4728]: I0227 11:30:01.813579 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6" event={"ID":"44b8e204-6403-4250-8ca4-c046062ad67a","Type":"ContainerStarted","Data":"042b1a1555eb986c70cc5f98559bf5fe4f612822d91a6b6f99e65b187dd771fc"} Feb 27 11:30:01 crc kubenswrapper[4728]: I0227 11:30:01.814453 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536530-fbx9s" event={"ID":"e377a6ae-105e-4b6b-86c7-59224b2409d4","Type":"ContainerStarted","Data":"476f50a69d7c4b895b4a7382da9a411868bb094e682a91071e990e220004fbfd"} Feb 27 11:30:03 crc kubenswrapper[4728]: I0227 11:30:03.417555 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6" Feb 27 11:30:03 crc kubenswrapper[4728]: I0227 11:30:03.581543 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbx2n\" (UniqueName: \"kubernetes.io/projected/44b8e204-6403-4250-8ca4-c046062ad67a-kube-api-access-xbx2n\") pod \"44b8e204-6403-4250-8ca4-c046062ad67a\" (UID: \"44b8e204-6403-4250-8ca4-c046062ad67a\") " Feb 27 11:30:03 crc kubenswrapper[4728]: I0227 11:30:03.581842 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44b8e204-6403-4250-8ca4-c046062ad67a-secret-volume\") pod \"44b8e204-6403-4250-8ca4-c046062ad67a\" (UID: \"44b8e204-6403-4250-8ca4-c046062ad67a\") " Feb 27 11:30:03 crc kubenswrapper[4728]: I0227 11:30:03.581993 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44b8e204-6403-4250-8ca4-c046062ad67a-config-volume\") pod \"44b8e204-6403-4250-8ca4-c046062ad67a\" (UID: \"44b8e204-6403-4250-8ca4-c046062ad67a\") " Feb 27 11:30:03 crc kubenswrapper[4728]: I0227 
11:30:03.583085 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44b8e204-6403-4250-8ca4-c046062ad67a-config-volume" (OuterVolumeSpecName: "config-volume") pod "44b8e204-6403-4250-8ca4-c046062ad67a" (UID: "44b8e204-6403-4250-8ca4-c046062ad67a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:30:03 crc kubenswrapper[4728]: I0227 11:30:03.591027 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44b8e204-6403-4250-8ca4-c046062ad67a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "44b8e204-6403-4250-8ca4-c046062ad67a" (UID: "44b8e204-6403-4250-8ca4-c046062ad67a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:30:03 crc kubenswrapper[4728]: I0227 11:30:03.591676 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44b8e204-6403-4250-8ca4-c046062ad67a-kube-api-access-xbx2n" (OuterVolumeSpecName: "kube-api-access-xbx2n") pod "44b8e204-6403-4250-8ca4-c046062ad67a" (UID: "44b8e204-6403-4250-8ca4-c046062ad67a"). InnerVolumeSpecName "kube-api-access-xbx2n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:30:03 crc kubenswrapper[4728]: I0227 11:30:03.684127 4728 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44b8e204-6403-4250-8ca4-c046062ad67a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 11:30:03 crc kubenswrapper[4728]: I0227 11:30:03.684158 4728 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44b8e204-6403-4250-8ca4-c046062ad67a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 11:30:03 crc kubenswrapper[4728]: I0227 11:30:03.684169 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbx2n\" (UniqueName: \"kubernetes.io/projected/44b8e204-6403-4250-8ca4-c046062ad67a-kube-api-access-xbx2n\") on node \"crc\" DevicePath \"\"" Feb 27 11:30:03 crc kubenswrapper[4728]: I0227 11:30:03.835229 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536530-fbx9s" event={"ID":"e377a6ae-105e-4b6b-86c7-59224b2409d4","Type":"ContainerStarted","Data":"a9e034d6e6a4d90653cdc9a63b2c73396d706e170ff08c383ad3419edb24aa48"} Feb 27 11:30:03 crc kubenswrapper[4728]: I0227 11:30:03.837218 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6" event={"ID":"44b8e204-6403-4250-8ca4-c046062ad67a","Type":"ContainerDied","Data":"042b1a1555eb986c70cc5f98559bf5fe4f612822d91a6b6f99e65b187dd771fc"} Feb 27 11:30:03 crc kubenswrapper[4728]: I0227 11:30:03.837250 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="042b1a1555eb986c70cc5f98559bf5fe4f612822d91a6b6f99e65b187dd771fc" Feb 27 11:30:03 crc kubenswrapper[4728]: I0227 11:30:03.837290 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536530-xxgb6" Feb 27 11:30:04 crc kubenswrapper[4728]: I0227 11:30:04.475121 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536530-fbx9s" podStartSLOduration=1.9983870000000001 podStartE2EDuration="4.475097875s" podCreationTimestamp="2026-02-27 11:30:00 +0000 UTC" firstStartedPulling="2026-02-27 11:30:01.029445832 +0000 UTC m=+3820.991811938" lastFinishedPulling="2026-02-27 11:30:03.506156707 +0000 UTC m=+3823.468522813" observedRunningTime="2026-02-27 11:30:03.861922658 +0000 UTC m=+3823.824288784" watchObservedRunningTime="2026-02-27 11:30:04.475097875 +0000 UTC m=+3824.437463981" Feb 27 11:30:04 crc kubenswrapper[4728]: I0227 11:30:04.506756 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc"] Feb 27 11:30:04 crc kubenswrapper[4728]: I0227 11:30:04.520925 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536485-vwhxc"] Feb 27 11:30:04 crc kubenswrapper[4728]: I0227 11:30:04.745579 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90322058-3b16-4e4a-8116-7f02e4865437" path="/var/lib/kubelet/pods/90322058-3b16-4e4a-8116-7f02e4865437/volumes" Feb 27 11:30:04 crc kubenswrapper[4728]: I0227 11:30:04.852117 4728 generic.go:334] "Generic (PLEG): container finished" podID="e377a6ae-105e-4b6b-86c7-59224b2409d4" containerID="a9e034d6e6a4d90653cdc9a63b2c73396d706e170ff08c383ad3419edb24aa48" exitCode=0 Feb 27 11:30:04 crc kubenswrapper[4728]: I0227 11:30:04.852183 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536530-fbx9s" event={"ID":"e377a6ae-105e-4b6b-86c7-59224b2409d4","Type":"ContainerDied","Data":"a9e034d6e6a4d90653cdc9a63b2c73396d706e170ff08c383ad3419edb24aa48"} Feb 27 11:30:06 crc kubenswrapper[4728]: 
I0227 11:30:06.347908 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536530-fbx9s" Feb 27 11:30:06 crc kubenswrapper[4728]: I0227 11:30:06.485818 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dbp9\" (UniqueName: \"kubernetes.io/projected/e377a6ae-105e-4b6b-86c7-59224b2409d4-kube-api-access-5dbp9\") pod \"e377a6ae-105e-4b6b-86c7-59224b2409d4\" (UID: \"e377a6ae-105e-4b6b-86c7-59224b2409d4\") " Feb 27 11:30:06 crc kubenswrapper[4728]: I0227 11:30:06.492697 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e377a6ae-105e-4b6b-86c7-59224b2409d4-kube-api-access-5dbp9" (OuterVolumeSpecName: "kube-api-access-5dbp9") pod "e377a6ae-105e-4b6b-86c7-59224b2409d4" (UID: "e377a6ae-105e-4b6b-86c7-59224b2409d4"). InnerVolumeSpecName "kube-api-access-5dbp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:30:06 crc kubenswrapper[4728]: I0227 11:30:06.588884 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dbp9\" (UniqueName: \"kubernetes.io/projected/e377a6ae-105e-4b6b-86c7-59224b2409d4-kube-api-access-5dbp9\") on node \"crc\" DevicePath \"\"" Feb 27 11:30:06 crc kubenswrapper[4728]: I0227 11:30:06.875975 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536530-fbx9s" event={"ID":"e377a6ae-105e-4b6b-86c7-59224b2409d4","Type":"ContainerDied","Data":"476f50a69d7c4b895b4a7382da9a411868bb094e682a91071e990e220004fbfd"} Feb 27 11:30:06 crc kubenswrapper[4728]: I0227 11:30:06.876549 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="476f50a69d7c4b895b4a7382da9a411868bb094e682a91071e990e220004fbfd" Feb 27 11:30:06 crc kubenswrapper[4728]: I0227 11:30:06.876025 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536530-fbx9s" Feb 27 11:30:06 crc kubenswrapper[4728]: I0227 11:30:06.946837 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536524-57mpz"] Feb 27 11:30:06 crc kubenswrapper[4728]: I0227 11:30:06.961400 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536524-57mpz"] Feb 27 11:30:08 crc kubenswrapper[4728]: I0227 11:30:08.748247 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b7b33ea-08da-4e39-9c2e-11292a8b4901" path="/var/lib/kubelet/pods/6b7b33ea-08da-4e39-9c2e-11292a8b4901/volumes" Feb 27 11:30:26 crc kubenswrapper[4728]: I0227 11:30:26.854717 4728 scope.go:117] "RemoveContainer" containerID="f14258437bbc9dd89432d81129d1039a48ccdb53d17fd1aec5ead6d4c06e6c3c" Feb 27 11:30:26 crc kubenswrapper[4728]: I0227 11:30:26.907468 4728 scope.go:117] "RemoveContainer" containerID="63b19765d61ef3e37790b3aeddbf0afca9b520422fc33a8c0d77158817c683d6" Feb 27 11:31:05 crc kubenswrapper[4728]: I0227 11:31:05.921773 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:31:05 crc kubenswrapper[4728]: I0227 11:31:05.922381 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:31:35 crc kubenswrapper[4728]: I0227 11:31:35.922057 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:31:35 crc kubenswrapper[4728]: I0227 11:31:35.922592 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:32:00 crc kubenswrapper[4728]: I0227 11:32:00.150882 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536532-bhg7z"] Feb 27 11:32:00 crc kubenswrapper[4728]: E0227 11:32:00.152364 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44b8e204-6403-4250-8ca4-c046062ad67a" containerName="collect-profiles" Feb 27 11:32:00 crc kubenswrapper[4728]: I0227 11:32:00.152385 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="44b8e204-6403-4250-8ca4-c046062ad67a" containerName="collect-profiles" Feb 27 11:32:00 crc kubenswrapper[4728]: E0227 11:32:00.152414 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e377a6ae-105e-4b6b-86c7-59224b2409d4" containerName="oc" Feb 27 11:32:00 crc kubenswrapper[4728]: I0227 11:32:00.152424 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e377a6ae-105e-4b6b-86c7-59224b2409d4" containerName="oc" Feb 27 11:32:00 crc kubenswrapper[4728]: I0227 11:32:00.152840 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e377a6ae-105e-4b6b-86c7-59224b2409d4" containerName="oc" Feb 27 11:32:00 crc kubenswrapper[4728]: I0227 11:32:00.152869 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="44b8e204-6403-4250-8ca4-c046062ad67a" containerName="collect-profiles" Feb 27 11:32:00 crc kubenswrapper[4728]: I0227 11:32:00.153910 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536532-bhg7z" Feb 27 11:32:00 crc kubenswrapper[4728]: I0227 11:32:00.156431 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:32:00 crc kubenswrapper[4728]: I0227 11:32:00.156662 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:32:00 crc kubenswrapper[4728]: I0227 11:32:00.156856 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:32:00 crc kubenswrapper[4728]: I0227 11:32:00.162891 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536532-bhg7z"] Feb 27 11:32:00 crc kubenswrapper[4728]: I0227 11:32:00.323954 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz92p\" (UniqueName: \"kubernetes.io/projected/6a2534bc-84b6-49d4-b11f-fc16880d0678-kube-api-access-wz92p\") pod \"auto-csr-approver-29536532-bhg7z\" (UID: \"6a2534bc-84b6-49d4-b11f-fc16880d0678\") " pod="openshift-infra/auto-csr-approver-29536532-bhg7z" Feb 27 11:32:00 crc kubenswrapper[4728]: I0227 11:32:00.426317 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz92p\" (UniqueName: \"kubernetes.io/projected/6a2534bc-84b6-49d4-b11f-fc16880d0678-kube-api-access-wz92p\") pod \"auto-csr-approver-29536532-bhg7z\" (UID: \"6a2534bc-84b6-49d4-b11f-fc16880d0678\") " pod="openshift-infra/auto-csr-approver-29536532-bhg7z" Feb 27 11:32:00 crc kubenswrapper[4728]: I0227 11:32:00.460534 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz92p\" (UniqueName: \"kubernetes.io/projected/6a2534bc-84b6-49d4-b11f-fc16880d0678-kube-api-access-wz92p\") pod \"auto-csr-approver-29536532-bhg7z\" (UID: \"6a2534bc-84b6-49d4-b11f-fc16880d0678\") " 
pod="openshift-infra/auto-csr-approver-29536532-bhg7z" Feb 27 11:32:00 crc kubenswrapper[4728]: I0227 11:32:00.474689 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536532-bhg7z" Feb 27 11:32:01 crc kubenswrapper[4728]: I0227 11:32:01.070914 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 11:32:01 crc kubenswrapper[4728]: I0227 11:32:01.070920 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536532-bhg7z"] Feb 27 11:32:01 crc kubenswrapper[4728]: I0227 11:32:01.352578 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536532-bhg7z" event={"ID":"6a2534bc-84b6-49d4-b11f-fc16880d0678","Type":"ContainerStarted","Data":"43c3eaf54d02238dda30b7437be7e7d400fd0711cbba889f2ae5462c652bc98e"} Feb 27 11:32:03 crc kubenswrapper[4728]: I0227 11:32:03.379427 4728 generic.go:334] "Generic (PLEG): container finished" podID="6a2534bc-84b6-49d4-b11f-fc16880d0678" containerID="922f927ff18edaa702920e7b2efe5653ff357816ea647108574d39381cd22aca" exitCode=0 Feb 27 11:32:03 crc kubenswrapper[4728]: I0227 11:32:03.379583 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536532-bhg7z" event={"ID":"6a2534bc-84b6-49d4-b11f-fc16880d0678","Type":"ContainerDied","Data":"922f927ff18edaa702920e7b2efe5653ff357816ea647108574d39381cd22aca"} Feb 27 11:32:04 crc kubenswrapper[4728]: I0227 11:32:04.923114 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536532-bhg7z" Feb 27 11:32:05 crc kubenswrapper[4728]: I0227 11:32:05.045691 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz92p\" (UniqueName: \"kubernetes.io/projected/6a2534bc-84b6-49d4-b11f-fc16880d0678-kube-api-access-wz92p\") pod \"6a2534bc-84b6-49d4-b11f-fc16880d0678\" (UID: \"6a2534bc-84b6-49d4-b11f-fc16880d0678\") " Feb 27 11:32:05 crc kubenswrapper[4728]: I0227 11:32:05.052258 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a2534bc-84b6-49d4-b11f-fc16880d0678-kube-api-access-wz92p" (OuterVolumeSpecName: "kube-api-access-wz92p") pod "6a2534bc-84b6-49d4-b11f-fc16880d0678" (UID: "6a2534bc-84b6-49d4-b11f-fc16880d0678"). InnerVolumeSpecName "kube-api-access-wz92p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:32:05 crc kubenswrapper[4728]: I0227 11:32:05.149267 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz92p\" (UniqueName: \"kubernetes.io/projected/6a2534bc-84b6-49d4-b11f-fc16880d0678-kube-api-access-wz92p\") on node \"crc\" DevicePath \"\"" Feb 27 11:32:05 crc kubenswrapper[4728]: I0227 11:32:05.405969 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536532-bhg7z" event={"ID":"6a2534bc-84b6-49d4-b11f-fc16880d0678","Type":"ContainerDied","Data":"43c3eaf54d02238dda30b7437be7e7d400fd0711cbba889f2ae5462c652bc98e"} Feb 27 11:32:05 crc kubenswrapper[4728]: I0227 11:32:05.406027 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43c3eaf54d02238dda30b7437be7e7d400fd0711cbba889f2ae5462c652bc98e" Feb 27 11:32:05 crc kubenswrapper[4728]: I0227 11:32:05.406060 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536532-bhg7z" Feb 27 11:32:05 crc kubenswrapper[4728]: I0227 11:32:05.922599 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:32:05 crc kubenswrapper[4728]: I0227 11:32:05.922969 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:32:05 crc kubenswrapper[4728]: I0227 11:32:05.923035 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 11:32:05 crc kubenswrapper[4728]: I0227 11:32:05.923986 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed"} pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 11:32:05 crc kubenswrapper[4728]: I0227 11:32:05.924046 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" containerID="cri-o://d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" gracePeriod=600 Feb 27 11:32:06 crc kubenswrapper[4728]: I0227 11:32:06.015135 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-infra/auto-csr-approver-29536526-qnr5w"] Feb 27 11:32:06 crc kubenswrapper[4728]: I0227 11:32:06.028785 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536526-qnr5w"] Feb 27 11:32:06 crc kubenswrapper[4728]: E0227 11:32:06.052486 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:32:06 crc kubenswrapper[4728]: I0227 11:32:06.423143 4728 generic.go:334] "Generic (PLEG): container finished" podID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" exitCode=0 Feb 27 11:32:06 crc kubenswrapper[4728]: I0227 11:32:06.423352 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerDied","Data":"d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed"} Feb 27 11:32:06 crc kubenswrapper[4728]: I0227 11:32:06.423621 4728 scope.go:117] "RemoveContainer" containerID="c82c2d54b29d2520aadcad1224fcb062e86dca5cffd7978c2b852d83c9a59843" Feb 27 11:32:06 crc kubenswrapper[4728]: I0227 11:32:06.424681 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:32:06 crc kubenswrapper[4728]: E0227 11:32:06.425185 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:32:06 crc kubenswrapper[4728]: I0227 11:32:06.738857 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e720dd4b-f96d-490b-92c3-1aa62f21c98f" path="/var/lib/kubelet/pods/e720dd4b-f96d-490b-92c3-1aa62f21c98f/volumes" Feb 27 11:32:21 crc kubenswrapper[4728]: I0227 11:32:21.725294 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:32:21 crc kubenswrapper[4728]: E0227 11:32:21.726152 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:32:27 crc kubenswrapper[4728]: I0227 11:32:27.055838 4728 scope.go:117] "RemoveContainer" containerID="a1a505cfa4eebdc4fbea80bad8f5db119d250514e1201c2107f41fd20b244c95" Feb 27 11:32:33 crc kubenswrapper[4728]: I0227 11:32:33.725676 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:32:33 crc kubenswrapper[4728]: E0227 11:32:33.726945 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 
27 11:32:44 crc kubenswrapper[4728]: I0227 11:32:44.727825 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:32:44 crc kubenswrapper[4728]: E0227 11:32:44.728554 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:32:57 crc kubenswrapper[4728]: I0227 11:32:57.725652 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:32:57 crc kubenswrapper[4728]: E0227 11:32:57.728165 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:33:08 crc kubenswrapper[4728]: I0227 11:33:08.726631 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:33:08 crc kubenswrapper[4728]: E0227 11:33:08.728571 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" 
podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:33:23 crc kubenswrapper[4728]: I0227 11:33:23.725427 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:33:23 crc kubenswrapper[4728]: E0227 11:33:23.727014 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:33:37 crc kubenswrapper[4728]: I0227 11:33:37.725333 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:33:37 crc kubenswrapper[4728]: E0227 11:33:37.726946 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:33:41 crc kubenswrapper[4728]: I0227 11:33:41.141733 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x5h92"] Feb 27 11:33:41 crc kubenswrapper[4728]: E0227 11:33:41.143268 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a2534bc-84b6-49d4-b11f-fc16880d0678" containerName="oc" Feb 27 11:33:41 crc kubenswrapper[4728]: I0227 11:33:41.143287 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a2534bc-84b6-49d4-b11f-fc16880d0678" containerName="oc" Feb 27 11:33:41 crc kubenswrapper[4728]: I0227 11:33:41.143643 4728 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6a2534bc-84b6-49d4-b11f-fc16880d0678" containerName="oc" Feb 27 11:33:41 crc kubenswrapper[4728]: I0227 11:33:41.145789 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x5h92" Feb 27 11:33:41 crc kubenswrapper[4728]: I0227 11:33:41.181054 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x5h92"] Feb 27 11:33:41 crc kubenswrapper[4728]: I0227 11:33:41.262276 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05786a93-ff1e-4308-99cf-6b8bcbee1840-utilities\") pod \"certified-operators-x5h92\" (UID: \"05786a93-ff1e-4308-99cf-6b8bcbee1840\") " pod="openshift-marketplace/certified-operators-x5h92" Feb 27 11:33:41 crc kubenswrapper[4728]: I0227 11:33:41.262321 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05786a93-ff1e-4308-99cf-6b8bcbee1840-catalog-content\") pod \"certified-operators-x5h92\" (UID: \"05786a93-ff1e-4308-99cf-6b8bcbee1840\") " pod="openshift-marketplace/certified-operators-x5h92" Feb 27 11:33:41 crc kubenswrapper[4728]: I0227 11:33:41.262342 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c46bm\" (UniqueName: \"kubernetes.io/projected/05786a93-ff1e-4308-99cf-6b8bcbee1840-kube-api-access-c46bm\") pod \"certified-operators-x5h92\" (UID: \"05786a93-ff1e-4308-99cf-6b8bcbee1840\") " pod="openshift-marketplace/certified-operators-x5h92" Feb 27 11:33:41 crc kubenswrapper[4728]: I0227 11:33:41.364206 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05786a93-ff1e-4308-99cf-6b8bcbee1840-utilities\") pod 
\"certified-operators-x5h92\" (UID: \"05786a93-ff1e-4308-99cf-6b8bcbee1840\") " pod="openshift-marketplace/certified-operators-x5h92" Feb 27 11:33:41 crc kubenswrapper[4728]: I0227 11:33:41.364272 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05786a93-ff1e-4308-99cf-6b8bcbee1840-catalog-content\") pod \"certified-operators-x5h92\" (UID: \"05786a93-ff1e-4308-99cf-6b8bcbee1840\") " pod="openshift-marketplace/certified-operators-x5h92" Feb 27 11:33:41 crc kubenswrapper[4728]: I0227 11:33:41.364299 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c46bm\" (UniqueName: \"kubernetes.io/projected/05786a93-ff1e-4308-99cf-6b8bcbee1840-kube-api-access-c46bm\") pod \"certified-operators-x5h92\" (UID: \"05786a93-ff1e-4308-99cf-6b8bcbee1840\") " pod="openshift-marketplace/certified-operators-x5h92" Feb 27 11:33:41 crc kubenswrapper[4728]: I0227 11:33:41.364897 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05786a93-ff1e-4308-99cf-6b8bcbee1840-utilities\") pod \"certified-operators-x5h92\" (UID: \"05786a93-ff1e-4308-99cf-6b8bcbee1840\") " pod="openshift-marketplace/certified-operators-x5h92" Feb 27 11:33:41 crc kubenswrapper[4728]: I0227 11:33:41.364939 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05786a93-ff1e-4308-99cf-6b8bcbee1840-catalog-content\") pod \"certified-operators-x5h92\" (UID: \"05786a93-ff1e-4308-99cf-6b8bcbee1840\") " pod="openshift-marketplace/certified-operators-x5h92" Feb 27 11:33:41 crc kubenswrapper[4728]: I0227 11:33:41.391286 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c46bm\" (UniqueName: \"kubernetes.io/projected/05786a93-ff1e-4308-99cf-6b8bcbee1840-kube-api-access-c46bm\") pod 
\"certified-operators-x5h92\" (UID: \"05786a93-ff1e-4308-99cf-6b8bcbee1840\") " pod="openshift-marketplace/certified-operators-x5h92" Feb 27 11:33:41 crc kubenswrapper[4728]: I0227 11:33:41.486197 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x5h92" Feb 27 11:33:42 crc kubenswrapper[4728]: I0227 11:33:42.022784 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x5h92"] Feb 27 11:33:42 crc kubenswrapper[4728]: W0227 11:33:42.024517 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05786a93_ff1e_4308_99cf_6b8bcbee1840.slice/crio-171683da9b6a659fd8f34549b028a61cefce3c9d95c9e958baff469ed49873de WatchSource:0}: Error finding container 171683da9b6a659fd8f34549b028a61cefce3c9d95c9e958baff469ed49873de: Status 404 returned error can't find the container with id 171683da9b6a659fd8f34549b028a61cefce3c9d95c9e958baff469ed49873de Feb 27 11:33:42 crc kubenswrapper[4728]: I0227 11:33:42.762373 4728 generic.go:334] "Generic (PLEG): container finished" podID="05786a93-ff1e-4308-99cf-6b8bcbee1840" containerID="256ac461d99f7360b3cf643b5421b5eb9c8fe262f251267f58a18a0cb00491d4" exitCode=0 Feb 27 11:33:42 crc kubenswrapper[4728]: I0227 11:33:42.762599 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5h92" event={"ID":"05786a93-ff1e-4308-99cf-6b8bcbee1840","Type":"ContainerDied","Data":"256ac461d99f7360b3cf643b5421b5eb9c8fe262f251267f58a18a0cb00491d4"} Feb 27 11:33:42 crc kubenswrapper[4728]: I0227 11:33:42.763047 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5h92" event={"ID":"05786a93-ff1e-4308-99cf-6b8bcbee1840","Type":"ContainerStarted","Data":"171683da9b6a659fd8f34549b028a61cefce3c9d95c9e958baff469ed49873de"} Feb 27 11:33:43 crc kubenswrapper[4728]: I0227 
11:33:43.326967 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-npdrg"] Feb 27 11:33:43 crc kubenswrapper[4728]: I0227 11:33:43.331312 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-npdrg" Feb 27 11:33:43 crc kubenswrapper[4728]: I0227 11:33:43.356572 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-npdrg"] Feb 27 11:33:43 crc kubenswrapper[4728]: I0227 11:33:43.519139 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca68a9c0-d329-43ff-b729-1cf81cb44ada-catalog-content\") pod \"redhat-operators-npdrg\" (UID: \"ca68a9c0-d329-43ff-b729-1cf81cb44ada\") " pod="openshift-marketplace/redhat-operators-npdrg" Feb 27 11:33:43 crc kubenswrapper[4728]: I0227 11:33:43.519576 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca68a9c0-d329-43ff-b729-1cf81cb44ada-utilities\") pod \"redhat-operators-npdrg\" (UID: \"ca68a9c0-d329-43ff-b729-1cf81cb44ada\") " pod="openshift-marketplace/redhat-operators-npdrg" Feb 27 11:33:43 crc kubenswrapper[4728]: I0227 11:33:43.519711 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bdg7\" (UniqueName: \"kubernetes.io/projected/ca68a9c0-d329-43ff-b729-1cf81cb44ada-kube-api-access-2bdg7\") pod \"redhat-operators-npdrg\" (UID: \"ca68a9c0-d329-43ff-b729-1cf81cb44ada\") " pod="openshift-marketplace/redhat-operators-npdrg" Feb 27 11:33:43 crc kubenswrapper[4728]: I0227 11:33:43.623145 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bdg7\" (UniqueName: \"kubernetes.io/projected/ca68a9c0-d329-43ff-b729-1cf81cb44ada-kube-api-access-2bdg7\") pod 
\"redhat-operators-npdrg\" (UID: \"ca68a9c0-d329-43ff-b729-1cf81cb44ada\") " pod="openshift-marketplace/redhat-operators-npdrg" Feb 27 11:33:43 crc kubenswrapper[4728]: I0227 11:33:43.623239 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca68a9c0-d329-43ff-b729-1cf81cb44ada-catalog-content\") pod \"redhat-operators-npdrg\" (UID: \"ca68a9c0-d329-43ff-b729-1cf81cb44ada\") " pod="openshift-marketplace/redhat-operators-npdrg" Feb 27 11:33:43 crc kubenswrapper[4728]: I0227 11:33:43.623371 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca68a9c0-d329-43ff-b729-1cf81cb44ada-utilities\") pod \"redhat-operators-npdrg\" (UID: \"ca68a9c0-d329-43ff-b729-1cf81cb44ada\") " pod="openshift-marketplace/redhat-operators-npdrg" Feb 27 11:33:43 crc kubenswrapper[4728]: I0227 11:33:43.623835 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca68a9c0-d329-43ff-b729-1cf81cb44ada-catalog-content\") pod \"redhat-operators-npdrg\" (UID: \"ca68a9c0-d329-43ff-b729-1cf81cb44ada\") " pod="openshift-marketplace/redhat-operators-npdrg" Feb 27 11:33:43 crc kubenswrapper[4728]: I0227 11:33:43.623879 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca68a9c0-d329-43ff-b729-1cf81cb44ada-utilities\") pod \"redhat-operators-npdrg\" (UID: \"ca68a9c0-d329-43ff-b729-1cf81cb44ada\") " pod="openshift-marketplace/redhat-operators-npdrg" Feb 27 11:33:43 crc kubenswrapper[4728]: I0227 11:33:43.643707 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bdg7\" (UniqueName: \"kubernetes.io/projected/ca68a9c0-d329-43ff-b729-1cf81cb44ada-kube-api-access-2bdg7\") pod \"redhat-operators-npdrg\" (UID: \"ca68a9c0-d329-43ff-b729-1cf81cb44ada\") " 
pod="openshift-marketplace/redhat-operators-npdrg" Feb 27 11:33:43 crc kubenswrapper[4728]: I0227 11:33:43.671173 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-npdrg" Feb 27 11:33:43 crc kubenswrapper[4728]: I0227 11:33:43.794392 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5h92" event={"ID":"05786a93-ff1e-4308-99cf-6b8bcbee1840","Type":"ContainerStarted","Data":"612c7e067e1ae059c674ddb63ebaaf2a4de48689be7f2baa41219337673ca34e"} Feb 27 11:33:44 crc kubenswrapper[4728]: I0227 11:33:44.205045 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-npdrg"] Feb 27 11:33:44 crc kubenswrapper[4728]: W0227 11:33:44.211734 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca68a9c0_d329_43ff_b729_1cf81cb44ada.slice/crio-354c05003ec086d0209239860d766124e2aa01cef4dbfca8149cffece8b464e4 WatchSource:0}: Error finding container 354c05003ec086d0209239860d766124e2aa01cef4dbfca8149cffece8b464e4: Status 404 returned error can't find the container with id 354c05003ec086d0209239860d766124e2aa01cef4dbfca8149cffece8b464e4 Feb 27 11:33:44 crc kubenswrapper[4728]: I0227 11:33:44.815963 4728 generic.go:334] "Generic (PLEG): container finished" podID="ca68a9c0-d329-43ff-b729-1cf81cb44ada" containerID="4322c9be1b38cd1648322fac6b9697381199c4eaa380279b044cf8d6e009b275" exitCode=0 Feb 27 11:33:44 crc kubenswrapper[4728]: I0227 11:33:44.816702 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-npdrg" event={"ID":"ca68a9c0-d329-43ff-b729-1cf81cb44ada","Type":"ContainerDied","Data":"4322c9be1b38cd1648322fac6b9697381199c4eaa380279b044cf8d6e009b275"} Feb 27 11:33:44 crc kubenswrapper[4728]: I0227 11:33:44.816793 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-npdrg" event={"ID":"ca68a9c0-d329-43ff-b729-1cf81cb44ada","Type":"ContainerStarted","Data":"354c05003ec086d0209239860d766124e2aa01cef4dbfca8149cffece8b464e4"} Feb 27 11:33:45 crc kubenswrapper[4728]: I0227 11:33:45.828884 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-npdrg" event={"ID":"ca68a9c0-d329-43ff-b729-1cf81cb44ada","Type":"ContainerStarted","Data":"75588e52fd3a2af1c2cd218a8093bda606f580aae4d4de16a48139108c2be04a"} Feb 27 11:33:45 crc kubenswrapper[4728]: I0227 11:33:45.832174 4728 generic.go:334] "Generic (PLEG): container finished" podID="05786a93-ff1e-4308-99cf-6b8bcbee1840" containerID="612c7e067e1ae059c674ddb63ebaaf2a4de48689be7f2baa41219337673ca34e" exitCode=0 Feb 27 11:33:45 crc kubenswrapper[4728]: I0227 11:33:45.832211 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5h92" event={"ID":"05786a93-ff1e-4308-99cf-6b8bcbee1840","Type":"ContainerDied","Data":"612c7e067e1ae059c674ddb63ebaaf2a4de48689be7f2baa41219337673ca34e"} Feb 27 11:33:47 crc kubenswrapper[4728]: I0227 11:33:47.859068 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5h92" event={"ID":"05786a93-ff1e-4308-99cf-6b8bcbee1840","Type":"ContainerStarted","Data":"30ca011dc988406ee115149b53bfe228942402601a5290b1f0f8f2e476b61e79"} Feb 27 11:33:47 crc kubenswrapper[4728]: I0227 11:33:47.893606 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x5h92" podStartSLOduration=3.044624433 podStartE2EDuration="6.893579135s" podCreationTimestamp="2026-02-27 11:33:41 +0000 UTC" firstStartedPulling="2026-02-27 11:33:42.766150647 +0000 UTC m=+4042.728516743" lastFinishedPulling="2026-02-27 11:33:46.615105349 +0000 UTC m=+4046.577471445" observedRunningTime="2026-02-27 11:33:47.88254347 +0000 UTC m=+4047.844909616" 
watchObservedRunningTime="2026-02-27 11:33:47.893579135 +0000 UTC m=+4047.855945281" Feb 27 11:33:48 crc kubenswrapper[4728]: I0227 11:33:48.725853 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:33:48 crc kubenswrapper[4728]: E0227 11:33:48.726713 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:33:51 crc kubenswrapper[4728]: I0227 11:33:51.487104 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x5h92" Feb 27 11:33:51 crc kubenswrapper[4728]: I0227 11:33:51.487639 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x5h92" Feb 27 11:33:52 crc kubenswrapper[4728]: I0227 11:33:52.269907 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x5h92" Feb 27 11:33:52 crc kubenswrapper[4728]: I0227 11:33:52.427766 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x5h92" Feb 27 11:33:52 crc kubenswrapper[4728]: I0227 11:33:52.520728 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x5h92"] Feb 27 11:33:53 crc kubenswrapper[4728]: I0227 11:33:53.008754 4728 generic.go:334] "Generic (PLEG): container finished" podID="ca68a9c0-d329-43ff-b729-1cf81cb44ada" containerID="75588e52fd3a2af1c2cd218a8093bda606f580aae4d4de16a48139108c2be04a" exitCode=0 Feb 27 11:33:53 crc kubenswrapper[4728]: I0227 
11:33:53.008798 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-npdrg" event={"ID":"ca68a9c0-d329-43ff-b729-1cf81cb44ada","Type":"ContainerDied","Data":"75588e52fd3a2af1c2cd218a8093bda606f580aae4d4de16a48139108c2be04a"} Feb 27 11:33:54 crc kubenswrapper[4728]: I0227 11:33:54.026331 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-npdrg" event={"ID":"ca68a9c0-d329-43ff-b729-1cf81cb44ada","Type":"ContainerStarted","Data":"95fc3c8937763257e3e670ddc0747db81676b6126086f9037c86f1d571367c48"} Feb 27 11:33:54 crc kubenswrapper[4728]: I0227 11:33:54.026567 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x5h92" podUID="05786a93-ff1e-4308-99cf-6b8bcbee1840" containerName="registry-server" containerID="cri-o://30ca011dc988406ee115149b53bfe228942402601a5290b1f0f8f2e476b61e79" gracePeriod=2 Feb 27 11:33:54 crc kubenswrapper[4728]: I0227 11:33:54.068071 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-npdrg" podStartSLOduration=2.452227426 podStartE2EDuration="11.068050307s" podCreationTimestamp="2026-02-27 11:33:43 +0000 UTC" firstStartedPulling="2026-02-27 11:33:44.81921838 +0000 UTC m=+4044.781584486" lastFinishedPulling="2026-02-27 11:33:53.435041241 +0000 UTC m=+4053.397407367" observedRunningTime="2026-02-27 11:33:54.05802214 +0000 UTC m=+4054.020388256" watchObservedRunningTime="2026-02-27 11:33:54.068050307 +0000 UTC m=+4054.030416423" Feb 27 11:33:55 crc kubenswrapper[4728]: I0227 11:33:55.043783 4728 generic.go:334] "Generic (PLEG): container finished" podID="05786a93-ff1e-4308-99cf-6b8bcbee1840" containerID="30ca011dc988406ee115149b53bfe228942402601a5290b1f0f8f2e476b61e79" exitCode=0 Feb 27 11:33:55 crc kubenswrapper[4728]: I0227 11:33:55.043963 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-x5h92" event={"ID":"05786a93-ff1e-4308-99cf-6b8bcbee1840","Type":"ContainerDied","Data":"30ca011dc988406ee115149b53bfe228942402601a5290b1f0f8f2e476b61e79"} Feb 27 11:33:55 crc kubenswrapper[4728]: I0227 11:33:55.044081 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5h92" event={"ID":"05786a93-ff1e-4308-99cf-6b8bcbee1840","Type":"ContainerDied","Data":"171683da9b6a659fd8f34549b028a61cefce3c9d95c9e958baff469ed49873de"} Feb 27 11:33:55 crc kubenswrapper[4728]: I0227 11:33:55.044097 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="171683da9b6a659fd8f34549b028a61cefce3c9d95c9e958baff469ed49873de" Feb 27 11:33:55 crc kubenswrapper[4728]: I0227 11:33:55.109481 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x5h92" Feb 27 11:33:55 crc kubenswrapper[4728]: I0227 11:33:55.254337 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c46bm\" (UniqueName: \"kubernetes.io/projected/05786a93-ff1e-4308-99cf-6b8bcbee1840-kube-api-access-c46bm\") pod \"05786a93-ff1e-4308-99cf-6b8bcbee1840\" (UID: \"05786a93-ff1e-4308-99cf-6b8bcbee1840\") " Feb 27 11:33:55 crc kubenswrapper[4728]: I0227 11:33:55.254643 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05786a93-ff1e-4308-99cf-6b8bcbee1840-utilities\") pod \"05786a93-ff1e-4308-99cf-6b8bcbee1840\" (UID: \"05786a93-ff1e-4308-99cf-6b8bcbee1840\") " Feb 27 11:33:55 crc kubenswrapper[4728]: I0227 11:33:55.254832 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05786a93-ff1e-4308-99cf-6b8bcbee1840-catalog-content\") pod \"05786a93-ff1e-4308-99cf-6b8bcbee1840\" (UID: 
\"05786a93-ff1e-4308-99cf-6b8bcbee1840\") " Feb 27 11:33:55 crc kubenswrapper[4728]: I0227 11:33:55.256846 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05786a93-ff1e-4308-99cf-6b8bcbee1840-utilities" (OuterVolumeSpecName: "utilities") pod "05786a93-ff1e-4308-99cf-6b8bcbee1840" (UID: "05786a93-ff1e-4308-99cf-6b8bcbee1840"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:33:55 crc kubenswrapper[4728]: I0227 11:33:55.261885 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05786a93-ff1e-4308-99cf-6b8bcbee1840-kube-api-access-c46bm" (OuterVolumeSpecName: "kube-api-access-c46bm") pod "05786a93-ff1e-4308-99cf-6b8bcbee1840" (UID: "05786a93-ff1e-4308-99cf-6b8bcbee1840"). InnerVolumeSpecName "kube-api-access-c46bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:33:55 crc kubenswrapper[4728]: I0227 11:33:55.342644 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05786a93-ff1e-4308-99cf-6b8bcbee1840-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05786a93-ff1e-4308-99cf-6b8bcbee1840" (UID: "05786a93-ff1e-4308-99cf-6b8bcbee1840"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:33:55 crc kubenswrapper[4728]: I0227 11:33:55.358289 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05786a93-ff1e-4308-99cf-6b8bcbee1840-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:33:55 crc kubenswrapper[4728]: I0227 11:33:55.358326 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05786a93-ff1e-4308-99cf-6b8bcbee1840-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:33:55 crc kubenswrapper[4728]: I0227 11:33:55.358341 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c46bm\" (UniqueName: \"kubernetes.io/projected/05786a93-ff1e-4308-99cf-6b8bcbee1840-kube-api-access-c46bm\") on node \"crc\" DevicePath \"\"" Feb 27 11:33:56 crc kubenswrapper[4728]: I0227 11:33:56.073435 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x5h92" Feb 27 11:33:56 crc kubenswrapper[4728]: I0227 11:33:56.127339 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x5h92"] Feb 27 11:33:56 crc kubenswrapper[4728]: I0227 11:33:56.142099 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x5h92"] Feb 27 11:33:56 crc kubenswrapper[4728]: I0227 11:33:56.747365 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05786a93-ff1e-4308-99cf-6b8bcbee1840" path="/var/lib/kubelet/pods/05786a93-ff1e-4308-99cf-6b8bcbee1840/volumes" Feb 27 11:34:00 crc kubenswrapper[4728]: I0227 11:34:00.177802 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536534-2kxjm"] Feb 27 11:34:00 crc kubenswrapper[4728]: E0227 11:34:00.179066 4728 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="05786a93-ff1e-4308-99cf-6b8bcbee1840" containerName="extract-utilities" Feb 27 11:34:00 crc kubenswrapper[4728]: I0227 11:34:00.179084 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="05786a93-ff1e-4308-99cf-6b8bcbee1840" containerName="extract-utilities" Feb 27 11:34:00 crc kubenswrapper[4728]: E0227 11:34:00.179141 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05786a93-ff1e-4308-99cf-6b8bcbee1840" containerName="registry-server" Feb 27 11:34:00 crc kubenswrapper[4728]: I0227 11:34:00.179150 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="05786a93-ff1e-4308-99cf-6b8bcbee1840" containerName="registry-server" Feb 27 11:34:00 crc kubenswrapper[4728]: E0227 11:34:00.179160 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05786a93-ff1e-4308-99cf-6b8bcbee1840" containerName="extract-content" Feb 27 11:34:00 crc kubenswrapper[4728]: I0227 11:34:00.179169 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="05786a93-ff1e-4308-99cf-6b8bcbee1840" containerName="extract-content" Feb 27 11:34:00 crc kubenswrapper[4728]: I0227 11:34:00.179475 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="05786a93-ff1e-4308-99cf-6b8bcbee1840" containerName="registry-server" Feb 27 11:34:00 crc kubenswrapper[4728]: I0227 11:34:00.180559 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536534-2kxjm" Feb 27 11:34:00 crc kubenswrapper[4728]: I0227 11:34:00.189800 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:34:00 crc kubenswrapper[4728]: I0227 11:34:00.189993 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:34:00 crc kubenswrapper[4728]: I0227 11:34:00.190773 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:34:00 crc kubenswrapper[4728]: I0227 11:34:00.201760 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536534-2kxjm"] Feb 27 11:34:00 crc kubenswrapper[4728]: I0227 11:34:00.327227 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpfdj\" (UniqueName: \"kubernetes.io/projected/04fe871b-15b6-4e3c-bf8d-d1744de17bd3-kube-api-access-gpfdj\") pod \"auto-csr-approver-29536534-2kxjm\" (UID: \"04fe871b-15b6-4e3c-bf8d-d1744de17bd3\") " pod="openshift-infra/auto-csr-approver-29536534-2kxjm" Feb 27 11:34:00 crc kubenswrapper[4728]: I0227 11:34:00.430303 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpfdj\" (UniqueName: \"kubernetes.io/projected/04fe871b-15b6-4e3c-bf8d-d1744de17bd3-kube-api-access-gpfdj\") pod \"auto-csr-approver-29536534-2kxjm\" (UID: \"04fe871b-15b6-4e3c-bf8d-d1744de17bd3\") " pod="openshift-infra/auto-csr-approver-29536534-2kxjm" Feb 27 11:34:00 crc kubenswrapper[4728]: I0227 11:34:00.448923 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpfdj\" (UniqueName: \"kubernetes.io/projected/04fe871b-15b6-4e3c-bf8d-d1744de17bd3-kube-api-access-gpfdj\") pod \"auto-csr-approver-29536534-2kxjm\" (UID: \"04fe871b-15b6-4e3c-bf8d-d1744de17bd3\") " 
pod="openshift-infra/auto-csr-approver-29536534-2kxjm" Feb 27 11:34:00 crc kubenswrapper[4728]: I0227 11:34:00.550381 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536534-2kxjm" Feb 27 11:34:01 crc kubenswrapper[4728]: I0227 11:34:01.016936 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536534-2kxjm"] Feb 27 11:34:01 crc kubenswrapper[4728]: I0227 11:34:01.135961 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536534-2kxjm" event={"ID":"04fe871b-15b6-4e3c-bf8d-d1744de17bd3","Type":"ContainerStarted","Data":"5445a24ab4ea501c9526e6f30bbab322b942456ce64d1cb8ba24f7bd0a9bbc6b"} Feb 27 11:34:01 crc kubenswrapper[4728]: I0227 11:34:01.725655 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:34:01 crc kubenswrapper[4728]: E0227 11:34:01.726286 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:34:03 crc kubenswrapper[4728]: I0227 11:34:03.163124 4728 generic.go:334] "Generic (PLEG): container finished" podID="04fe871b-15b6-4e3c-bf8d-d1744de17bd3" containerID="3327a804a7ccce4e2c05ae0298afafdc977084ee93873fa6b2f5c9e688a4cdd7" exitCode=0 Feb 27 11:34:03 crc kubenswrapper[4728]: I0227 11:34:03.163186 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536534-2kxjm" event={"ID":"04fe871b-15b6-4e3c-bf8d-d1744de17bd3","Type":"ContainerDied","Data":"3327a804a7ccce4e2c05ae0298afafdc977084ee93873fa6b2f5c9e688a4cdd7"} 
Feb 27 11:34:03 crc kubenswrapper[4728]: I0227 11:34:03.671229 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-npdrg" Feb 27 11:34:03 crc kubenswrapper[4728]: I0227 11:34:03.671727 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-npdrg" Feb 27 11:34:04 crc kubenswrapper[4728]: I0227 11:34:04.726333 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-npdrg" podUID="ca68a9c0-d329-43ff-b729-1cf81cb44ada" containerName="registry-server" probeResult="failure" output=< Feb 27 11:34:04 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:34:04 crc kubenswrapper[4728]: > Feb 27 11:34:05 crc kubenswrapper[4728]: I0227 11:34:05.166429 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536534-2kxjm" Feb 27 11:34:05 crc kubenswrapper[4728]: I0227 11:34:05.194394 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536534-2kxjm" event={"ID":"04fe871b-15b6-4e3c-bf8d-d1744de17bd3","Type":"ContainerDied","Data":"5445a24ab4ea501c9526e6f30bbab322b942456ce64d1cb8ba24f7bd0a9bbc6b"} Feb 27 11:34:05 crc kubenswrapper[4728]: I0227 11:34:05.194439 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5445a24ab4ea501c9526e6f30bbab322b942456ce64d1cb8ba24f7bd0a9bbc6b" Feb 27 11:34:05 crc kubenswrapper[4728]: I0227 11:34:05.194434 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536534-2kxjm" Feb 27 11:34:05 crc kubenswrapper[4728]: I0227 11:34:05.266351 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpfdj\" (UniqueName: \"kubernetes.io/projected/04fe871b-15b6-4e3c-bf8d-d1744de17bd3-kube-api-access-gpfdj\") pod \"04fe871b-15b6-4e3c-bf8d-d1744de17bd3\" (UID: \"04fe871b-15b6-4e3c-bf8d-d1744de17bd3\") " Feb 27 11:34:05 crc kubenswrapper[4728]: I0227 11:34:05.272209 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04fe871b-15b6-4e3c-bf8d-d1744de17bd3-kube-api-access-gpfdj" (OuterVolumeSpecName: "kube-api-access-gpfdj") pod "04fe871b-15b6-4e3c-bf8d-d1744de17bd3" (UID: "04fe871b-15b6-4e3c-bf8d-d1744de17bd3"). InnerVolumeSpecName "kube-api-access-gpfdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:34:05 crc kubenswrapper[4728]: I0227 11:34:05.370410 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpfdj\" (UniqueName: \"kubernetes.io/projected/04fe871b-15b6-4e3c-bf8d-d1744de17bd3-kube-api-access-gpfdj\") on node \"crc\" DevicePath \"\"" Feb 27 11:34:06 crc kubenswrapper[4728]: I0227 11:34:06.252327 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536528-f2kcm"] Feb 27 11:34:06 crc kubenswrapper[4728]: I0227 11:34:06.264317 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536528-f2kcm"] Feb 27 11:34:06 crc kubenswrapper[4728]: I0227 11:34:06.741824 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b37a29cd-efd0-447e-8f9c-567ea9a93dee" path="/var/lib/kubelet/pods/b37a29cd-efd0-447e-8f9c-567ea9a93dee/volumes" Feb 27 11:34:13 crc kubenswrapper[4728]: I0227 11:34:13.733331 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-npdrg" Feb 27 11:34:13 crc 
kubenswrapper[4728]: I0227 11:34:13.790894 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-npdrg" Feb 27 11:34:14 crc kubenswrapper[4728]: I0227 11:34:14.530157 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-npdrg"] Feb 27 11:34:14 crc kubenswrapper[4728]: I0227 11:34:14.726431 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:34:14 crc kubenswrapper[4728]: E0227 11:34:14.726775 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:34:15 crc kubenswrapper[4728]: I0227 11:34:15.325850 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-npdrg" podUID="ca68a9c0-d329-43ff-b729-1cf81cb44ada" containerName="registry-server" containerID="cri-o://95fc3c8937763257e3e670ddc0747db81676b6126086f9037c86f1d571367c48" gracePeriod=2 Feb 27 11:34:15 crc kubenswrapper[4728]: I0227 11:34:15.959199 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-npdrg" Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.038708 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bdg7\" (UniqueName: \"kubernetes.io/projected/ca68a9c0-d329-43ff-b729-1cf81cb44ada-kube-api-access-2bdg7\") pod \"ca68a9c0-d329-43ff-b729-1cf81cb44ada\" (UID: \"ca68a9c0-d329-43ff-b729-1cf81cb44ada\") " Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.039406 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca68a9c0-d329-43ff-b729-1cf81cb44ada-catalog-content\") pod \"ca68a9c0-d329-43ff-b729-1cf81cb44ada\" (UID: \"ca68a9c0-d329-43ff-b729-1cf81cb44ada\") " Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.039582 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca68a9c0-d329-43ff-b729-1cf81cb44ada-utilities\") pod \"ca68a9c0-d329-43ff-b729-1cf81cb44ada\" (UID: \"ca68a9c0-d329-43ff-b729-1cf81cb44ada\") " Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.041567 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca68a9c0-d329-43ff-b729-1cf81cb44ada-utilities" (OuterVolumeSpecName: "utilities") pod "ca68a9c0-d329-43ff-b729-1cf81cb44ada" (UID: "ca68a9c0-d329-43ff-b729-1cf81cb44ada"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.046034 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca68a9c0-d329-43ff-b729-1cf81cb44ada-kube-api-access-2bdg7" (OuterVolumeSpecName: "kube-api-access-2bdg7") pod "ca68a9c0-d329-43ff-b729-1cf81cb44ada" (UID: "ca68a9c0-d329-43ff-b729-1cf81cb44ada"). InnerVolumeSpecName "kube-api-access-2bdg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.143029 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bdg7\" (UniqueName: \"kubernetes.io/projected/ca68a9c0-d329-43ff-b729-1cf81cb44ada-kube-api-access-2bdg7\") on node \"crc\" DevicePath \"\"" Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.143381 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca68a9c0-d329-43ff-b729-1cf81cb44ada-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.154637 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca68a9c0-d329-43ff-b729-1cf81cb44ada-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca68a9c0-d329-43ff-b729-1cf81cb44ada" (UID: "ca68a9c0-d329-43ff-b729-1cf81cb44ada"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.250338 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca68a9c0-d329-43ff-b729-1cf81cb44ada-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.338736 4728 generic.go:334] "Generic (PLEG): container finished" podID="ca68a9c0-d329-43ff-b729-1cf81cb44ada" containerID="95fc3c8937763257e3e670ddc0747db81676b6126086f9037c86f1d571367c48" exitCode=0 Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.338799 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-npdrg" event={"ID":"ca68a9c0-d329-43ff-b729-1cf81cb44ada","Type":"ContainerDied","Data":"95fc3c8937763257e3e670ddc0747db81676b6126086f9037c86f1d571367c48"} Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.339062 4728 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-npdrg" event={"ID":"ca68a9c0-d329-43ff-b729-1cf81cb44ada","Type":"ContainerDied","Data":"354c05003ec086d0209239860d766124e2aa01cef4dbfca8149cffece8b464e4"} Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.339085 4728 scope.go:117] "RemoveContainer" containerID="95fc3c8937763257e3e670ddc0747db81676b6126086f9037c86f1d571367c48" Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.338878 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-npdrg" Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.375639 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-npdrg"] Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.382060 4728 scope.go:117] "RemoveContainer" containerID="75588e52fd3a2af1c2cd218a8093bda606f580aae4d4de16a48139108c2be04a" Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.386356 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-npdrg"] Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.432775 4728 scope.go:117] "RemoveContainer" containerID="4322c9be1b38cd1648322fac6b9697381199c4eaa380279b044cf8d6e009b275" Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.486413 4728 scope.go:117] "RemoveContainer" containerID="95fc3c8937763257e3e670ddc0747db81676b6126086f9037c86f1d571367c48" Feb 27 11:34:16 crc kubenswrapper[4728]: E0227 11:34:16.486993 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95fc3c8937763257e3e670ddc0747db81676b6126086f9037c86f1d571367c48\": container with ID starting with 95fc3c8937763257e3e670ddc0747db81676b6126086f9037c86f1d571367c48 not found: ID does not exist" containerID="95fc3c8937763257e3e670ddc0747db81676b6126086f9037c86f1d571367c48" Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.487129 4728 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95fc3c8937763257e3e670ddc0747db81676b6126086f9037c86f1d571367c48"} err="failed to get container status \"95fc3c8937763257e3e670ddc0747db81676b6126086f9037c86f1d571367c48\": rpc error: code = NotFound desc = could not find container \"95fc3c8937763257e3e670ddc0747db81676b6126086f9037c86f1d571367c48\": container with ID starting with 95fc3c8937763257e3e670ddc0747db81676b6126086f9037c86f1d571367c48 not found: ID does not exist" Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.487166 4728 scope.go:117] "RemoveContainer" containerID="75588e52fd3a2af1c2cd218a8093bda606f580aae4d4de16a48139108c2be04a" Feb 27 11:34:16 crc kubenswrapper[4728]: E0227 11:34:16.487604 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75588e52fd3a2af1c2cd218a8093bda606f580aae4d4de16a48139108c2be04a\": container with ID starting with 75588e52fd3a2af1c2cd218a8093bda606f580aae4d4de16a48139108c2be04a not found: ID does not exist" containerID="75588e52fd3a2af1c2cd218a8093bda606f580aae4d4de16a48139108c2be04a" Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.487635 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75588e52fd3a2af1c2cd218a8093bda606f580aae4d4de16a48139108c2be04a"} err="failed to get container status \"75588e52fd3a2af1c2cd218a8093bda606f580aae4d4de16a48139108c2be04a\": rpc error: code = NotFound desc = could not find container \"75588e52fd3a2af1c2cd218a8093bda606f580aae4d4de16a48139108c2be04a\": container with ID starting with 75588e52fd3a2af1c2cd218a8093bda606f580aae4d4de16a48139108c2be04a not found: ID does not exist" Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.487661 4728 scope.go:117] "RemoveContainer" containerID="4322c9be1b38cd1648322fac6b9697381199c4eaa380279b044cf8d6e009b275" Feb 27 11:34:16 crc kubenswrapper[4728]: E0227 
11:34:16.487926 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4322c9be1b38cd1648322fac6b9697381199c4eaa380279b044cf8d6e009b275\": container with ID starting with 4322c9be1b38cd1648322fac6b9697381199c4eaa380279b044cf8d6e009b275 not found: ID does not exist" containerID="4322c9be1b38cd1648322fac6b9697381199c4eaa380279b044cf8d6e009b275" Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.487947 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4322c9be1b38cd1648322fac6b9697381199c4eaa380279b044cf8d6e009b275"} err="failed to get container status \"4322c9be1b38cd1648322fac6b9697381199c4eaa380279b044cf8d6e009b275\": rpc error: code = NotFound desc = could not find container \"4322c9be1b38cd1648322fac6b9697381199c4eaa380279b044cf8d6e009b275\": container with ID starting with 4322c9be1b38cd1648322fac6b9697381199c4eaa380279b044cf8d6e009b275 not found: ID does not exist" Feb 27 11:34:16 crc kubenswrapper[4728]: I0227 11:34:16.748497 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca68a9c0-d329-43ff-b729-1cf81cb44ada" path="/var/lib/kubelet/pods/ca68a9c0-d329-43ff-b729-1cf81cb44ada/volumes" Feb 27 11:34:27 crc kubenswrapper[4728]: I0227 11:34:27.194420 4728 scope.go:117] "RemoveContainer" containerID="4985d1e189a1d195fb24e205379bb9c046b84be64f358b115659602ce1cd2c4e" Feb 27 11:34:29 crc kubenswrapper[4728]: I0227 11:34:29.726031 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:34:29 crc kubenswrapper[4728]: E0227 11:34:29.727342 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.072118 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cvbcm"] Feb 27 11:34:36 crc kubenswrapper[4728]: E0227 11:34:36.073137 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04fe871b-15b6-4e3c-bf8d-d1744de17bd3" containerName="oc" Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.073152 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="04fe871b-15b6-4e3c-bf8d-d1744de17bd3" containerName="oc" Feb 27 11:34:36 crc kubenswrapper[4728]: E0227 11:34:36.073174 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca68a9c0-d329-43ff-b729-1cf81cb44ada" containerName="extract-utilities" Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.073182 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca68a9c0-d329-43ff-b729-1cf81cb44ada" containerName="extract-utilities" Feb 27 11:34:36 crc kubenswrapper[4728]: E0227 11:34:36.073197 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca68a9c0-d329-43ff-b729-1cf81cb44ada" containerName="extract-content" Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.073206 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca68a9c0-d329-43ff-b729-1cf81cb44ada" containerName="extract-content" Feb 27 11:34:36 crc kubenswrapper[4728]: E0227 11:34:36.073223 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca68a9c0-d329-43ff-b729-1cf81cb44ada" containerName="registry-server" Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.073230 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca68a9c0-d329-43ff-b729-1cf81cb44ada" containerName="registry-server" Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.073579 4728 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ca68a9c0-d329-43ff-b729-1cf81cb44ada" containerName="registry-server" Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.073627 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="04fe871b-15b6-4e3c-bf8d-d1744de17bd3" containerName="oc" Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.075562 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cvbcm" Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.104061 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cvbcm"] Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.224944 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l7mk\" (UniqueName: \"kubernetes.io/projected/af896d7f-2364-4b61-b1a2-93d68da31088-kube-api-access-6l7mk\") pod \"community-operators-cvbcm\" (UID: \"af896d7f-2364-4b61-b1a2-93d68da31088\") " pod="openshift-marketplace/community-operators-cvbcm" Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.225251 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af896d7f-2364-4b61-b1a2-93d68da31088-utilities\") pod \"community-operators-cvbcm\" (UID: \"af896d7f-2364-4b61-b1a2-93d68da31088\") " pod="openshift-marketplace/community-operators-cvbcm" Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.225597 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af896d7f-2364-4b61-b1a2-93d68da31088-catalog-content\") pod \"community-operators-cvbcm\" (UID: \"af896d7f-2364-4b61-b1a2-93d68da31088\") " pod="openshift-marketplace/community-operators-cvbcm" Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.328549 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6l7mk\" (UniqueName: \"kubernetes.io/projected/af896d7f-2364-4b61-b1a2-93d68da31088-kube-api-access-6l7mk\") pod \"community-operators-cvbcm\" (UID: \"af896d7f-2364-4b61-b1a2-93d68da31088\") " pod="openshift-marketplace/community-operators-cvbcm" Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.328774 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af896d7f-2364-4b61-b1a2-93d68da31088-utilities\") pod \"community-operators-cvbcm\" (UID: \"af896d7f-2364-4b61-b1a2-93d68da31088\") " pod="openshift-marketplace/community-operators-cvbcm" Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.328939 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af896d7f-2364-4b61-b1a2-93d68da31088-catalog-content\") pod \"community-operators-cvbcm\" (UID: \"af896d7f-2364-4b61-b1a2-93d68da31088\") " pod="openshift-marketplace/community-operators-cvbcm" Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.329433 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af896d7f-2364-4b61-b1a2-93d68da31088-utilities\") pod \"community-operators-cvbcm\" (UID: \"af896d7f-2364-4b61-b1a2-93d68da31088\") " pod="openshift-marketplace/community-operators-cvbcm" Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.329465 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af896d7f-2364-4b61-b1a2-93d68da31088-catalog-content\") pod \"community-operators-cvbcm\" (UID: \"af896d7f-2364-4b61-b1a2-93d68da31088\") " pod="openshift-marketplace/community-operators-cvbcm" Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.350380 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6l7mk\" (UniqueName: \"kubernetes.io/projected/af896d7f-2364-4b61-b1a2-93d68da31088-kube-api-access-6l7mk\") pod \"community-operators-cvbcm\" (UID: \"af896d7f-2364-4b61-b1a2-93d68da31088\") " pod="openshift-marketplace/community-operators-cvbcm" Feb 27 11:34:36 crc kubenswrapper[4728]: I0227 11:34:36.396741 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cvbcm" Feb 27 11:34:37 crc kubenswrapper[4728]: I0227 11:34:37.028637 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cvbcm"] Feb 27 11:34:37 crc kubenswrapper[4728]: W0227 11:34:37.036810 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf896d7f_2364_4b61_b1a2_93d68da31088.slice/crio-52124a91e272812cf3cc05367749a19e4ca1c01ca203feacbeb335852b17ba9f WatchSource:0}: Error finding container 52124a91e272812cf3cc05367749a19e4ca1c01ca203feacbeb335852b17ba9f: Status 404 returned error can't find the container with id 52124a91e272812cf3cc05367749a19e4ca1c01ca203feacbeb335852b17ba9f Feb 27 11:34:37 crc kubenswrapper[4728]: I0227 11:34:37.640893 4728 generic.go:334] "Generic (PLEG): container finished" podID="af896d7f-2364-4b61-b1a2-93d68da31088" containerID="42bb1678d901e303eb4711c561d5b7ba9f5b32cc8aa7344473662e0b8f468dac" exitCode=0 Feb 27 11:34:37 crc kubenswrapper[4728]: I0227 11:34:37.640975 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cvbcm" event={"ID":"af896d7f-2364-4b61-b1a2-93d68da31088","Type":"ContainerDied","Data":"42bb1678d901e303eb4711c561d5b7ba9f5b32cc8aa7344473662e0b8f468dac"} Feb 27 11:34:37 crc kubenswrapper[4728]: I0227 11:34:37.641541 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cvbcm" 
event={"ID":"af896d7f-2364-4b61-b1a2-93d68da31088","Type":"ContainerStarted","Data":"52124a91e272812cf3cc05367749a19e4ca1c01ca203feacbeb335852b17ba9f"} Feb 27 11:34:38 crc kubenswrapper[4728]: I0227 11:34:38.655184 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cvbcm" event={"ID":"af896d7f-2364-4b61-b1a2-93d68da31088","Type":"ContainerStarted","Data":"a4beb603288db837a18d3c88bd3cbda6cee55bcd63f610083e85b32925b00c6c"} Feb 27 11:34:40 crc kubenswrapper[4728]: I0227 11:34:40.677182 4728 generic.go:334] "Generic (PLEG): container finished" podID="af896d7f-2364-4b61-b1a2-93d68da31088" containerID="a4beb603288db837a18d3c88bd3cbda6cee55bcd63f610083e85b32925b00c6c" exitCode=0 Feb 27 11:34:40 crc kubenswrapper[4728]: I0227 11:34:40.677399 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cvbcm" event={"ID":"af896d7f-2364-4b61-b1a2-93d68da31088","Type":"ContainerDied","Data":"a4beb603288db837a18d3c88bd3cbda6cee55bcd63f610083e85b32925b00c6c"} Feb 27 11:34:41 crc kubenswrapper[4728]: I0227 11:34:41.693492 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cvbcm" event={"ID":"af896d7f-2364-4b61-b1a2-93d68da31088","Type":"ContainerStarted","Data":"ccf4f8f31efe03869efc1996e1725dd8515d76e548a3e2927958d1bbbf8d65b6"} Feb 27 11:34:41 crc kubenswrapper[4728]: I0227 11:34:41.729499 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cvbcm" podStartSLOduration=2.267436835 podStartE2EDuration="5.72947683s" podCreationTimestamp="2026-02-27 11:34:36 +0000 UTC" firstStartedPulling="2026-02-27 11:34:37.645169066 +0000 UTC m=+4097.607535212" lastFinishedPulling="2026-02-27 11:34:41.107209101 +0000 UTC m=+4101.069575207" observedRunningTime="2026-02-27 11:34:41.717107749 +0000 UTC m=+4101.679473855" watchObservedRunningTime="2026-02-27 11:34:41.72947683 +0000 UTC 
m=+4101.691842956" Feb 27 11:34:42 crc kubenswrapper[4728]: I0227 11:34:42.725204 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:34:42 crc kubenswrapper[4728]: E0227 11:34:42.725846 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:34:46 crc kubenswrapper[4728]: I0227 11:34:46.397590 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cvbcm" Feb 27 11:34:46 crc kubenswrapper[4728]: I0227 11:34:46.398097 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cvbcm" Feb 27 11:34:46 crc kubenswrapper[4728]: I0227 11:34:46.485891 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cvbcm" Feb 27 11:34:46 crc kubenswrapper[4728]: I0227 11:34:46.818376 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cvbcm" Feb 27 11:34:46 crc kubenswrapper[4728]: I0227 11:34:46.874141 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cvbcm"] Feb 27 11:34:48 crc kubenswrapper[4728]: I0227 11:34:48.794002 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cvbcm" podUID="af896d7f-2364-4b61-b1a2-93d68da31088" containerName="registry-server" containerID="cri-o://ccf4f8f31efe03869efc1996e1725dd8515d76e548a3e2927958d1bbbf8d65b6" gracePeriod=2 Feb 
27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.381934 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cvbcm" Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.486601 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af896d7f-2364-4b61-b1a2-93d68da31088-catalog-content\") pod \"af896d7f-2364-4b61-b1a2-93d68da31088\" (UID: \"af896d7f-2364-4b61-b1a2-93d68da31088\") " Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.486789 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l7mk\" (UniqueName: \"kubernetes.io/projected/af896d7f-2364-4b61-b1a2-93d68da31088-kube-api-access-6l7mk\") pod \"af896d7f-2364-4b61-b1a2-93d68da31088\" (UID: \"af896d7f-2364-4b61-b1a2-93d68da31088\") " Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.486912 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af896d7f-2364-4b61-b1a2-93d68da31088-utilities\") pod \"af896d7f-2364-4b61-b1a2-93d68da31088\" (UID: \"af896d7f-2364-4b61-b1a2-93d68da31088\") " Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.488322 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af896d7f-2364-4b61-b1a2-93d68da31088-utilities" (OuterVolumeSpecName: "utilities") pod "af896d7f-2364-4b61-b1a2-93d68da31088" (UID: "af896d7f-2364-4b61-b1a2-93d68da31088"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.503220 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af896d7f-2364-4b61-b1a2-93d68da31088-kube-api-access-6l7mk" (OuterVolumeSpecName: "kube-api-access-6l7mk") pod "af896d7f-2364-4b61-b1a2-93d68da31088" (UID: "af896d7f-2364-4b61-b1a2-93d68da31088"). InnerVolumeSpecName "kube-api-access-6l7mk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.591647 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l7mk\" (UniqueName: \"kubernetes.io/projected/af896d7f-2364-4b61-b1a2-93d68da31088-kube-api-access-6l7mk\") on node \"crc\" DevicePath \"\"" Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.591698 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af896d7f-2364-4b61-b1a2-93d68da31088-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.773685 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af896d7f-2364-4b61-b1a2-93d68da31088-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af896d7f-2364-4b61-b1a2-93d68da31088" (UID: "af896d7f-2364-4b61-b1a2-93d68da31088"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.799109 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af896d7f-2364-4b61-b1a2-93d68da31088-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.811338 4728 generic.go:334] "Generic (PLEG): container finished" podID="af896d7f-2364-4b61-b1a2-93d68da31088" containerID="ccf4f8f31efe03869efc1996e1725dd8515d76e548a3e2927958d1bbbf8d65b6" exitCode=0 Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.811395 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cvbcm" event={"ID":"af896d7f-2364-4b61-b1a2-93d68da31088","Type":"ContainerDied","Data":"ccf4f8f31efe03869efc1996e1725dd8515d76e548a3e2927958d1bbbf8d65b6"} Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.811429 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cvbcm" event={"ID":"af896d7f-2364-4b61-b1a2-93d68da31088","Type":"ContainerDied","Data":"52124a91e272812cf3cc05367749a19e4ca1c01ca203feacbeb335852b17ba9f"} Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.811428 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cvbcm" Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.811452 4728 scope.go:117] "RemoveContainer" containerID="ccf4f8f31efe03869efc1996e1725dd8515d76e548a3e2927958d1bbbf8d65b6" Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.870188 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cvbcm"] Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.883340 4728 scope.go:117] "RemoveContainer" containerID="a4beb603288db837a18d3c88bd3cbda6cee55bcd63f610083e85b32925b00c6c" Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.885128 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cvbcm"] Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.917332 4728 scope.go:117] "RemoveContainer" containerID="42bb1678d901e303eb4711c561d5b7ba9f5b32cc8aa7344473662e0b8f468dac" Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.977015 4728 scope.go:117] "RemoveContainer" containerID="ccf4f8f31efe03869efc1996e1725dd8515d76e548a3e2927958d1bbbf8d65b6" Feb 27 11:34:49 crc kubenswrapper[4728]: E0227 11:34:49.977718 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccf4f8f31efe03869efc1996e1725dd8515d76e548a3e2927958d1bbbf8d65b6\": container with ID starting with ccf4f8f31efe03869efc1996e1725dd8515d76e548a3e2927958d1bbbf8d65b6 not found: ID does not exist" containerID="ccf4f8f31efe03869efc1996e1725dd8515d76e548a3e2927958d1bbbf8d65b6" Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.977809 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccf4f8f31efe03869efc1996e1725dd8515d76e548a3e2927958d1bbbf8d65b6"} err="failed to get container status \"ccf4f8f31efe03869efc1996e1725dd8515d76e548a3e2927958d1bbbf8d65b6\": rpc error: code = NotFound desc = could not find 
container \"ccf4f8f31efe03869efc1996e1725dd8515d76e548a3e2927958d1bbbf8d65b6\": container with ID starting with ccf4f8f31efe03869efc1996e1725dd8515d76e548a3e2927958d1bbbf8d65b6 not found: ID does not exist" Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.977867 4728 scope.go:117] "RemoveContainer" containerID="a4beb603288db837a18d3c88bd3cbda6cee55bcd63f610083e85b32925b00c6c" Feb 27 11:34:49 crc kubenswrapper[4728]: E0227 11:34:49.979133 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4beb603288db837a18d3c88bd3cbda6cee55bcd63f610083e85b32925b00c6c\": container with ID starting with a4beb603288db837a18d3c88bd3cbda6cee55bcd63f610083e85b32925b00c6c not found: ID does not exist" containerID="a4beb603288db837a18d3c88bd3cbda6cee55bcd63f610083e85b32925b00c6c" Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.979170 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4beb603288db837a18d3c88bd3cbda6cee55bcd63f610083e85b32925b00c6c"} err="failed to get container status \"a4beb603288db837a18d3c88bd3cbda6cee55bcd63f610083e85b32925b00c6c\": rpc error: code = NotFound desc = could not find container \"a4beb603288db837a18d3c88bd3cbda6cee55bcd63f610083e85b32925b00c6c\": container with ID starting with a4beb603288db837a18d3c88bd3cbda6cee55bcd63f610083e85b32925b00c6c not found: ID does not exist" Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.979203 4728 scope.go:117] "RemoveContainer" containerID="42bb1678d901e303eb4711c561d5b7ba9f5b32cc8aa7344473662e0b8f468dac" Feb 27 11:34:49 crc kubenswrapper[4728]: E0227 11:34:49.979726 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42bb1678d901e303eb4711c561d5b7ba9f5b32cc8aa7344473662e0b8f468dac\": container with ID starting with 42bb1678d901e303eb4711c561d5b7ba9f5b32cc8aa7344473662e0b8f468dac not found: ID does 
not exist" containerID="42bb1678d901e303eb4711c561d5b7ba9f5b32cc8aa7344473662e0b8f468dac" Feb 27 11:34:49 crc kubenswrapper[4728]: I0227 11:34:49.979760 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42bb1678d901e303eb4711c561d5b7ba9f5b32cc8aa7344473662e0b8f468dac"} err="failed to get container status \"42bb1678d901e303eb4711c561d5b7ba9f5b32cc8aa7344473662e0b8f468dac\": rpc error: code = NotFound desc = could not find container \"42bb1678d901e303eb4711c561d5b7ba9f5b32cc8aa7344473662e0b8f468dac\": container with ID starting with 42bb1678d901e303eb4711c561d5b7ba9f5b32cc8aa7344473662e0b8f468dac not found: ID does not exist" Feb 27 11:34:50 crc kubenswrapper[4728]: I0227 11:34:50.777564 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af896d7f-2364-4b61-b1a2-93d68da31088" path="/var/lib/kubelet/pods/af896d7f-2364-4b61-b1a2-93d68da31088/volumes" Feb 27 11:34:55 crc kubenswrapper[4728]: I0227 11:34:55.725890 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:34:55 crc kubenswrapper[4728]: E0227 11:34:55.727230 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:35:07 crc kubenswrapper[4728]: I0227 11:35:07.725567 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:35:07 crc kubenswrapper[4728]: E0227 11:35:07.726674 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:35:20 crc kubenswrapper[4728]: I0227 11:35:20.782366 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:35:20 crc kubenswrapper[4728]: E0227 11:35:20.783347 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:35:35 crc kubenswrapper[4728]: I0227 11:35:35.725513 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:35:35 crc kubenswrapper[4728]: E0227 11:35:35.726296 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:35:48 crc kubenswrapper[4728]: I0227 11:35:48.725260 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:35:48 crc kubenswrapper[4728]: E0227 11:35:48.726853 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:35:59 crc kubenswrapper[4728]: I0227 11:35:59.725239 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:35:59 crc kubenswrapper[4728]: E0227 11:35:59.726050 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:36:00 crc kubenswrapper[4728]: I0227 11:36:00.166216 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536536-xk7nh"] Feb 27 11:36:00 crc kubenswrapper[4728]: E0227 11:36:00.166933 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af896d7f-2364-4b61-b1a2-93d68da31088" containerName="extract-utilities" Feb 27 11:36:00 crc kubenswrapper[4728]: I0227 11:36:00.166961 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="af896d7f-2364-4b61-b1a2-93d68da31088" containerName="extract-utilities" Feb 27 11:36:00 crc kubenswrapper[4728]: E0227 11:36:00.166983 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af896d7f-2364-4b61-b1a2-93d68da31088" containerName="extract-content" Feb 27 11:36:00 crc kubenswrapper[4728]: I0227 11:36:00.166994 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="af896d7f-2364-4b61-b1a2-93d68da31088" containerName="extract-content" Feb 27 11:36:00 crc kubenswrapper[4728]: E0227 
11:36:00.167031 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af896d7f-2364-4b61-b1a2-93d68da31088" containerName="registry-server" Feb 27 11:36:00 crc kubenswrapper[4728]: I0227 11:36:00.167042 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="af896d7f-2364-4b61-b1a2-93d68da31088" containerName="registry-server" Feb 27 11:36:00 crc kubenswrapper[4728]: I0227 11:36:00.167445 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="af896d7f-2364-4b61-b1a2-93d68da31088" containerName="registry-server" Feb 27 11:36:00 crc kubenswrapper[4728]: I0227 11:36:00.168553 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536536-xk7nh" Feb 27 11:36:00 crc kubenswrapper[4728]: I0227 11:36:00.171327 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:36:00 crc kubenswrapper[4728]: I0227 11:36:00.171361 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:36:00 crc kubenswrapper[4728]: I0227 11:36:00.172815 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:36:00 crc kubenswrapper[4728]: I0227 11:36:00.179814 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536536-xk7nh"] Feb 27 11:36:00 crc kubenswrapper[4728]: I0227 11:36:00.224397 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5v82\" (UniqueName: \"kubernetes.io/projected/c8af8c3c-e1b2-4588-aa43-0e33511b7ae9-kube-api-access-w5v82\") pod \"auto-csr-approver-29536536-xk7nh\" (UID: \"c8af8c3c-e1b2-4588-aa43-0e33511b7ae9\") " pod="openshift-infra/auto-csr-approver-29536536-xk7nh" Feb 27 11:36:00 crc kubenswrapper[4728]: I0227 11:36:00.326015 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w5v82\" (UniqueName: \"kubernetes.io/projected/c8af8c3c-e1b2-4588-aa43-0e33511b7ae9-kube-api-access-w5v82\") pod \"auto-csr-approver-29536536-xk7nh\" (UID: \"c8af8c3c-e1b2-4588-aa43-0e33511b7ae9\") " pod="openshift-infra/auto-csr-approver-29536536-xk7nh" Feb 27 11:36:00 crc kubenswrapper[4728]: I0227 11:36:00.882481 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5v82\" (UniqueName: \"kubernetes.io/projected/c8af8c3c-e1b2-4588-aa43-0e33511b7ae9-kube-api-access-w5v82\") pod \"auto-csr-approver-29536536-xk7nh\" (UID: \"c8af8c3c-e1b2-4588-aa43-0e33511b7ae9\") " pod="openshift-infra/auto-csr-approver-29536536-xk7nh" Feb 27 11:36:01 crc kubenswrapper[4728]: I0227 11:36:01.093243 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536536-xk7nh" Feb 27 11:36:01 crc kubenswrapper[4728]: I0227 11:36:01.625702 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536536-xk7nh"] Feb 27 11:36:01 crc kubenswrapper[4728]: I0227 11:36:01.702255 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536536-xk7nh" event={"ID":"c8af8c3c-e1b2-4588-aa43-0e33511b7ae9","Type":"ContainerStarted","Data":"2ede2ff02ee10862d450b5d8038a6569417107e55d365eb1c6a9f407fd9be212"} Feb 27 11:36:03 crc kubenswrapper[4728]: I0227 11:36:03.727007 4728 generic.go:334] "Generic (PLEG): container finished" podID="c8af8c3c-e1b2-4588-aa43-0e33511b7ae9" containerID="af1322b85ed8058615ec49a774a0674de9dda5fe024b2489b86fc45ed6223349" exitCode=0 Feb 27 11:36:03 crc kubenswrapper[4728]: I0227 11:36:03.727597 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536536-xk7nh" 
event={"ID":"c8af8c3c-e1b2-4588-aa43-0e33511b7ae9","Type":"ContainerDied","Data":"af1322b85ed8058615ec49a774a0674de9dda5fe024b2489b86fc45ed6223349"} Feb 27 11:36:05 crc kubenswrapper[4728]: I0227 11:36:05.178602 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536536-xk7nh" Feb 27 11:36:05 crc kubenswrapper[4728]: I0227 11:36:05.363639 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5v82\" (UniqueName: \"kubernetes.io/projected/c8af8c3c-e1b2-4588-aa43-0e33511b7ae9-kube-api-access-w5v82\") pod \"c8af8c3c-e1b2-4588-aa43-0e33511b7ae9\" (UID: \"c8af8c3c-e1b2-4588-aa43-0e33511b7ae9\") " Feb 27 11:36:05 crc kubenswrapper[4728]: I0227 11:36:05.373794 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8af8c3c-e1b2-4588-aa43-0e33511b7ae9-kube-api-access-w5v82" (OuterVolumeSpecName: "kube-api-access-w5v82") pod "c8af8c3c-e1b2-4588-aa43-0e33511b7ae9" (UID: "c8af8c3c-e1b2-4588-aa43-0e33511b7ae9"). InnerVolumeSpecName "kube-api-access-w5v82". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:36:05 crc kubenswrapper[4728]: I0227 11:36:05.467721 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5v82\" (UniqueName: \"kubernetes.io/projected/c8af8c3c-e1b2-4588-aa43-0e33511b7ae9-kube-api-access-w5v82\") on node \"crc\" DevicePath \"\"" Feb 27 11:36:05 crc kubenswrapper[4728]: I0227 11:36:05.753249 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536536-xk7nh" event={"ID":"c8af8c3c-e1b2-4588-aa43-0e33511b7ae9","Type":"ContainerDied","Data":"2ede2ff02ee10862d450b5d8038a6569417107e55d365eb1c6a9f407fd9be212"} Feb 27 11:36:05 crc kubenswrapper[4728]: I0227 11:36:05.753309 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ede2ff02ee10862d450b5d8038a6569417107e55d365eb1c6a9f407fd9be212" Feb 27 11:36:05 crc kubenswrapper[4728]: I0227 11:36:05.753406 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536536-xk7nh" Feb 27 11:36:06 crc kubenswrapper[4728]: I0227 11:36:06.282369 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536530-fbx9s"] Feb 27 11:36:06 crc kubenswrapper[4728]: I0227 11:36:06.296993 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536530-fbx9s"] Feb 27 11:36:06 crc kubenswrapper[4728]: I0227 11:36:06.742250 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e377a6ae-105e-4b6b-86c7-59224b2409d4" path="/var/lib/kubelet/pods/e377a6ae-105e-4b6b-86c7-59224b2409d4/volumes" Feb 27 11:36:14 crc kubenswrapper[4728]: I0227 11:36:14.725726 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:36:14 crc kubenswrapper[4728]: E0227 11:36:14.727042 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:36:27 crc kubenswrapper[4728]: I0227 11:36:27.356583 4728 scope.go:117] "RemoveContainer" containerID="a9e034d6e6a4d90653cdc9a63b2c73396d706e170ff08c383ad3419edb24aa48" Feb 27 11:36:28 crc kubenswrapper[4728]: I0227 11:36:28.726057 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:36:28 crc kubenswrapper[4728]: E0227 11:36:28.727004 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:36:40 crc kubenswrapper[4728]: I0227 11:36:40.734159 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:36:40 crc kubenswrapper[4728]: E0227 11:36:40.735110 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:36:55 crc kubenswrapper[4728]: I0227 11:36:55.725051 4728 scope.go:117] "RemoveContainer" 
containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:36:55 crc kubenswrapper[4728]: E0227 11:36:55.725838 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:37:09 crc kubenswrapper[4728]: I0227 11:37:09.726447 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:37:10 crc kubenswrapper[4728]: I0227 11:37:10.588475 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"ab30cd3a8640f0fe410f4fe277433311a2edc0921ebb4808792beee5c3183fbd"} Feb 27 11:38:00 crc kubenswrapper[4728]: I0227 11:38:00.153342 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536538-5n245"] Feb 27 11:38:00 crc kubenswrapper[4728]: E0227 11:38:00.154685 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8af8c3c-e1b2-4588-aa43-0e33511b7ae9" containerName="oc" Feb 27 11:38:00 crc kubenswrapper[4728]: I0227 11:38:00.154711 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8af8c3c-e1b2-4588-aa43-0e33511b7ae9" containerName="oc" Feb 27 11:38:00 crc kubenswrapper[4728]: I0227 11:38:00.155286 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8af8c3c-e1b2-4588-aa43-0e33511b7ae9" containerName="oc" Feb 27 11:38:00 crc kubenswrapper[4728]: I0227 11:38:00.156380 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536538-5n245" Feb 27 11:38:00 crc kubenswrapper[4728]: I0227 11:38:00.158218 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:38:00 crc kubenswrapper[4728]: I0227 11:38:00.159019 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:38:00 crc kubenswrapper[4728]: I0227 11:38:00.159243 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:38:00 crc kubenswrapper[4728]: I0227 11:38:00.165157 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536538-5n245"] Feb 27 11:38:00 crc kubenswrapper[4728]: I0227 11:38:00.280023 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkcfj\" (UniqueName: \"kubernetes.io/projected/74194f81-8740-4f00-af3b-999db7c2ff93-kube-api-access-wkcfj\") pod \"auto-csr-approver-29536538-5n245\" (UID: \"74194f81-8740-4f00-af3b-999db7c2ff93\") " pod="openshift-infra/auto-csr-approver-29536538-5n245" Feb 27 11:38:00 crc kubenswrapper[4728]: I0227 11:38:00.383044 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkcfj\" (UniqueName: \"kubernetes.io/projected/74194f81-8740-4f00-af3b-999db7c2ff93-kube-api-access-wkcfj\") pod \"auto-csr-approver-29536538-5n245\" (UID: \"74194f81-8740-4f00-af3b-999db7c2ff93\") " pod="openshift-infra/auto-csr-approver-29536538-5n245" Feb 27 11:38:00 crc kubenswrapper[4728]: I0227 11:38:00.404210 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkcfj\" (UniqueName: \"kubernetes.io/projected/74194f81-8740-4f00-af3b-999db7c2ff93-kube-api-access-wkcfj\") pod \"auto-csr-approver-29536538-5n245\" (UID: \"74194f81-8740-4f00-af3b-999db7c2ff93\") " 
pod="openshift-infra/auto-csr-approver-29536538-5n245" Feb 27 11:38:00 crc kubenswrapper[4728]: I0227 11:38:00.480863 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536538-5n245" Feb 27 11:38:00 crc kubenswrapper[4728]: I0227 11:38:00.973432 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 11:38:00 crc kubenswrapper[4728]: I0227 11:38:00.975470 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536538-5n245"] Feb 27 11:38:01 crc kubenswrapper[4728]: I0227 11:38:01.210192 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536538-5n245" event={"ID":"74194f81-8740-4f00-af3b-999db7c2ff93","Type":"ContainerStarted","Data":"d6a23fc1e23e54c0b1a5614e6dad60c3472b2a411f33f83879b85398659827a4"} Feb 27 11:38:03 crc kubenswrapper[4728]: I0227 11:38:03.238085 4728 generic.go:334] "Generic (PLEG): container finished" podID="74194f81-8740-4f00-af3b-999db7c2ff93" containerID="12648aab44f1d359122962e74b03a69b8bc454d7a6b37231c2e5b1a600d389a8" exitCode=0 Feb 27 11:38:03 crc kubenswrapper[4728]: I0227 11:38:03.238447 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536538-5n245" event={"ID":"74194f81-8740-4f00-af3b-999db7c2ff93","Type":"ContainerDied","Data":"12648aab44f1d359122962e74b03a69b8bc454d7a6b37231c2e5b1a600d389a8"} Feb 27 11:38:04 crc kubenswrapper[4728]: I0227 11:38:04.840569 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536538-5n245" Feb 27 11:38:05 crc kubenswrapper[4728]: I0227 11:38:05.022187 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkcfj\" (UniqueName: \"kubernetes.io/projected/74194f81-8740-4f00-af3b-999db7c2ff93-kube-api-access-wkcfj\") pod \"74194f81-8740-4f00-af3b-999db7c2ff93\" (UID: \"74194f81-8740-4f00-af3b-999db7c2ff93\") " Feb 27 11:38:05 crc kubenswrapper[4728]: I0227 11:38:05.029012 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74194f81-8740-4f00-af3b-999db7c2ff93-kube-api-access-wkcfj" (OuterVolumeSpecName: "kube-api-access-wkcfj") pod "74194f81-8740-4f00-af3b-999db7c2ff93" (UID: "74194f81-8740-4f00-af3b-999db7c2ff93"). InnerVolumeSpecName "kube-api-access-wkcfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:38:05 crc kubenswrapper[4728]: I0227 11:38:05.125905 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkcfj\" (UniqueName: \"kubernetes.io/projected/74194f81-8740-4f00-af3b-999db7c2ff93-kube-api-access-wkcfj\") on node \"crc\" DevicePath \"\"" Feb 27 11:38:05 crc kubenswrapper[4728]: I0227 11:38:05.272326 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536538-5n245" event={"ID":"74194f81-8740-4f00-af3b-999db7c2ff93","Type":"ContainerDied","Data":"d6a23fc1e23e54c0b1a5614e6dad60c3472b2a411f33f83879b85398659827a4"} Feb 27 11:38:05 crc kubenswrapper[4728]: I0227 11:38:05.272683 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6a23fc1e23e54c0b1a5614e6dad60c3472b2a411f33f83879b85398659827a4" Feb 27 11:38:05 crc kubenswrapper[4728]: I0227 11:38:05.272432 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536538-5n245" Feb 27 11:38:05 crc kubenswrapper[4728]: I0227 11:38:05.926779 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536532-bhg7z"] Feb 27 11:38:05 crc kubenswrapper[4728]: I0227 11:38:05.939934 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536532-bhg7z"] Feb 27 11:38:06 crc kubenswrapper[4728]: I0227 11:38:06.743468 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a2534bc-84b6-49d4-b11f-fc16880d0678" path="/var/lib/kubelet/pods/6a2534bc-84b6-49d4-b11f-fc16880d0678/volumes" Feb 27 11:38:27 crc kubenswrapper[4728]: I0227 11:38:27.575300 4728 scope.go:117] "RemoveContainer" containerID="922f927ff18edaa702920e7b2efe5653ff357816ea647108574d39381cd22aca" Feb 27 11:38:56 crc kubenswrapper[4728]: I0227 11:38:56.913926 4728 trace.go:236] Trace[674634889]: "Calculate volume metrics of glance for pod openstack/glance-default-external-api-0" (27-Feb-2026 11:38:55.832) (total time: 1080ms): Feb 27 11:38:56 crc kubenswrapper[4728]: Trace[674634889]: [1.080229219s] [1.080229219s] END Feb 27 11:39:35 crc kubenswrapper[4728]: I0227 11:39:35.921910 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:39:35 crc kubenswrapper[4728]: I0227 11:39:35.922562 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:39:37 crc kubenswrapper[4728]: I0227 11:39:37.808320 4728 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nvr8w"] Feb 27 11:39:37 crc kubenswrapper[4728]: E0227 11:39:37.809272 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74194f81-8740-4f00-af3b-999db7c2ff93" containerName="oc" Feb 27 11:39:37 crc kubenswrapper[4728]: I0227 11:39:37.809293 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="74194f81-8740-4f00-af3b-999db7c2ff93" containerName="oc" Feb 27 11:39:37 crc kubenswrapper[4728]: I0227 11:39:37.809599 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="74194f81-8740-4f00-af3b-999db7c2ff93" containerName="oc" Feb 27 11:39:37 crc kubenswrapper[4728]: I0227 11:39:37.811764 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nvr8w" Feb 27 11:39:37 crc kubenswrapper[4728]: I0227 11:39:37.833882 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nvr8w"] Feb 27 11:39:37 crc kubenswrapper[4728]: I0227 11:39:37.918286 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9b4a58-e169-48a3-bd79-e71cb5d1d041-catalog-content\") pod \"redhat-marketplace-nvr8w\" (UID: \"5c9b4a58-e169-48a3-bd79-e71cb5d1d041\") " pod="openshift-marketplace/redhat-marketplace-nvr8w" Feb 27 11:39:37 crc kubenswrapper[4728]: I0227 11:39:37.918339 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c9b4a58-e169-48a3-bd79-e71cb5d1d041-utilities\") pod \"redhat-marketplace-nvr8w\" (UID: \"5c9b4a58-e169-48a3-bd79-e71cb5d1d041\") " pod="openshift-marketplace/redhat-marketplace-nvr8w" Feb 27 11:39:37 crc kubenswrapper[4728]: I0227 11:39:37.918587 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-6xfzx\" (UniqueName: \"kubernetes.io/projected/5c9b4a58-e169-48a3-bd79-e71cb5d1d041-kube-api-access-6xfzx\") pod \"redhat-marketplace-nvr8w\" (UID: \"5c9b4a58-e169-48a3-bd79-e71cb5d1d041\") " pod="openshift-marketplace/redhat-marketplace-nvr8w" Feb 27 11:39:38 crc kubenswrapper[4728]: I0227 11:39:38.021217 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xfzx\" (UniqueName: \"kubernetes.io/projected/5c9b4a58-e169-48a3-bd79-e71cb5d1d041-kube-api-access-6xfzx\") pod \"redhat-marketplace-nvr8w\" (UID: \"5c9b4a58-e169-48a3-bd79-e71cb5d1d041\") " pod="openshift-marketplace/redhat-marketplace-nvr8w" Feb 27 11:39:38 crc kubenswrapper[4728]: I0227 11:39:38.021412 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9b4a58-e169-48a3-bd79-e71cb5d1d041-catalog-content\") pod \"redhat-marketplace-nvr8w\" (UID: \"5c9b4a58-e169-48a3-bd79-e71cb5d1d041\") " pod="openshift-marketplace/redhat-marketplace-nvr8w" Feb 27 11:39:38 crc kubenswrapper[4728]: I0227 11:39:38.021438 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c9b4a58-e169-48a3-bd79-e71cb5d1d041-utilities\") pod \"redhat-marketplace-nvr8w\" (UID: \"5c9b4a58-e169-48a3-bd79-e71cb5d1d041\") " pod="openshift-marketplace/redhat-marketplace-nvr8w" Feb 27 11:39:38 crc kubenswrapper[4728]: I0227 11:39:38.021970 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9b4a58-e169-48a3-bd79-e71cb5d1d041-catalog-content\") pod \"redhat-marketplace-nvr8w\" (UID: \"5c9b4a58-e169-48a3-bd79-e71cb5d1d041\") " pod="openshift-marketplace/redhat-marketplace-nvr8w" Feb 27 11:39:38 crc kubenswrapper[4728]: I0227 11:39:38.022426 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/5c9b4a58-e169-48a3-bd79-e71cb5d1d041-utilities\") pod \"redhat-marketplace-nvr8w\" (UID: \"5c9b4a58-e169-48a3-bd79-e71cb5d1d041\") " pod="openshift-marketplace/redhat-marketplace-nvr8w" Feb 27 11:39:38 crc kubenswrapper[4728]: I0227 11:39:38.044496 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xfzx\" (UniqueName: \"kubernetes.io/projected/5c9b4a58-e169-48a3-bd79-e71cb5d1d041-kube-api-access-6xfzx\") pod \"redhat-marketplace-nvr8w\" (UID: \"5c9b4a58-e169-48a3-bd79-e71cb5d1d041\") " pod="openshift-marketplace/redhat-marketplace-nvr8w" Feb 27 11:39:38 crc kubenswrapper[4728]: I0227 11:39:38.146767 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nvr8w" Feb 27 11:39:38 crc kubenswrapper[4728]: I0227 11:39:38.671149 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nvr8w"] Feb 27 11:39:38 crc kubenswrapper[4728]: W0227 11:39:38.677217 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c9b4a58_e169_48a3_bd79_e71cb5d1d041.slice/crio-c37de7a0590883e41ac2a101ba8912b062834a7d816f71ebc551da7f25b51e41 WatchSource:0}: Error finding container c37de7a0590883e41ac2a101ba8912b062834a7d816f71ebc551da7f25b51e41: Status 404 returned error can't find the container with id c37de7a0590883e41ac2a101ba8912b062834a7d816f71ebc551da7f25b51e41 Feb 27 11:39:39 crc kubenswrapper[4728]: I0227 11:39:39.180673 4728 generic.go:334] "Generic (PLEG): container finished" podID="5c9b4a58-e169-48a3-bd79-e71cb5d1d041" containerID="23e1f069c3821896cfec8dfd59056146753337ed536033cd1be3422c73eb19a4" exitCode=0 Feb 27 11:39:39 crc kubenswrapper[4728]: I0227 11:39:39.180737 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nvr8w" 
event={"ID":"5c9b4a58-e169-48a3-bd79-e71cb5d1d041","Type":"ContainerDied","Data":"23e1f069c3821896cfec8dfd59056146753337ed536033cd1be3422c73eb19a4"} Feb 27 11:39:39 crc kubenswrapper[4728]: I0227 11:39:39.180959 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nvr8w" event={"ID":"5c9b4a58-e169-48a3-bd79-e71cb5d1d041","Type":"ContainerStarted","Data":"c37de7a0590883e41ac2a101ba8912b062834a7d816f71ebc551da7f25b51e41"} Feb 27 11:39:40 crc kubenswrapper[4728]: I0227 11:39:40.198347 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nvr8w" event={"ID":"5c9b4a58-e169-48a3-bd79-e71cb5d1d041","Type":"ContainerStarted","Data":"5b92ef5e8ec4d0fbab9a512f01c7a326359db1f36801b367d963e885a2681a58"} Feb 27 11:39:42 crc kubenswrapper[4728]: I0227 11:39:42.225031 4728 generic.go:334] "Generic (PLEG): container finished" podID="5c9b4a58-e169-48a3-bd79-e71cb5d1d041" containerID="5b92ef5e8ec4d0fbab9a512f01c7a326359db1f36801b367d963e885a2681a58" exitCode=0 Feb 27 11:39:42 crc kubenswrapper[4728]: I0227 11:39:42.225145 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nvr8w" event={"ID":"5c9b4a58-e169-48a3-bd79-e71cb5d1d041","Type":"ContainerDied","Data":"5b92ef5e8ec4d0fbab9a512f01c7a326359db1f36801b367d963e885a2681a58"} Feb 27 11:39:43 crc kubenswrapper[4728]: I0227 11:39:43.240368 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nvr8w" event={"ID":"5c9b4a58-e169-48a3-bd79-e71cb5d1d041","Type":"ContainerStarted","Data":"4a57ca22208fb9f421fd9e71d4e1699f7810c751ed46c64fdb5ace5a92b63b02"} Feb 27 11:39:43 crc kubenswrapper[4728]: I0227 11:39:43.274276 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nvr8w" podStartSLOduration=2.823667322 podStartE2EDuration="6.274256868s" podCreationTimestamp="2026-02-27 11:39:37 +0000 
UTC" firstStartedPulling="2026-02-27 11:39:39.182735635 +0000 UTC m=+4399.145101741" lastFinishedPulling="2026-02-27 11:39:42.633325181 +0000 UTC m=+4402.595691287" observedRunningTime="2026-02-27 11:39:43.270610959 +0000 UTC m=+4403.232977075" watchObservedRunningTime="2026-02-27 11:39:43.274256868 +0000 UTC m=+4403.236622974" Feb 27 11:39:48 crc kubenswrapper[4728]: I0227 11:39:48.147640 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nvr8w" Feb 27 11:39:48 crc kubenswrapper[4728]: I0227 11:39:48.148379 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nvr8w" Feb 27 11:39:48 crc kubenswrapper[4728]: I0227 11:39:48.220038 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nvr8w" Feb 27 11:39:48 crc kubenswrapper[4728]: I0227 11:39:48.358938 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nvr8w" Feb 27 11:39:48 crc kubenswrapper[4728]: I0227 11:39:48.467536 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nvr8w"] Feb 27 11:39:50 crc kubenswrapper[4728]: I0227 11:39:50.331148 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nvr8w" podUID="5c9b4a58-e169-48a3-bd79-e71cb5d1d041" containerName="registry-server" containerID="cri-o://4a57ca22208fb9f421fd9e71d4e1699f7810c751ed46c64fdb5ace5a92b63b02" gracePeriod=2 Feb 27 11:39:50 crc kubenswrapper[4728]: I0227 11:39:50.981668 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nvr8w" Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.067702 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xfzx\" (UniqueName: \"kubernetes.io/projected/5c9b4a58-e169-48a3-bd79-e71cb5d1d041-kube-api-access-6xfzx\") pod \"5c9b4a58-e169-48a3-bd79-e71cb5d1d041\" (UID: \"5c9b4a58-e169-48a3-bd79-e71cb5d1d041\") " Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.067762 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c9b4a58-e169-48a3-bd79-e71cb5d1d041-utilities\") pod \"5c9b4a58-e169-48a3-bd79-e71cb5d1d041\" (UID: \"5c9b4a58-e169-48a3-bd79-e71cb5d1d041\") " Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.067794 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9b4a58-e169-48a3-bd79-e71cb5d1d041-catalog-content\") pod \"5c9b4a58-e169-48a3-bd79-e71cb5d1d041\" (UID: \"5c9b4a58-e169-48a3-bd79-e71cb5d1d041\") " Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.068894 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c9b4a58-e169-48a3-bd79-e71cb5d1d041-utilities" (OuterVolumeSpecName: "utilities") pod "5c9b4a58-e169-48a3-bd79-e71cb5d1d041" (UID: "5c9b4a58-e169-48a3-bd79-e71cb5d1d041"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.073568 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9b4a58-e169-48a3-bd79-e71cb5d1d041-kube-api-access-6xfzx" (OuterVolumeSpecName: "kube-api-access-6xfzx") pod "5c9b4a58-e169-48a3-bd79-e71cb5d1d041" (UID: "5c9b4a58-e169-48a3-bd79-e71cb5d1d041"). InnerVolumeSpecName "kube-api-access-6xfzx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.108067 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c9b4a58-e169-48a3-bd79-e71cb5d1d041-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c9b4a58-e169-48a3-bd79-e71cb5d1d041" (UID: "5c9b4a58-e169-48a3-bd79-e71cb5d1d041"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.171059 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xfzx\" (UniqueName: \"kubernetes.io/projected/5c9b4a58-e169-48a3-bd79-e71cb5d1d041-kube-api-access-6xfzx\") on node \"crc\" DevicePath \"\"" Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.171112 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c9b4a58-e169-48a3-bd79-e71cb5d1d041-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.171122 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9b4a58-e169-48a3-bd79-e71cb5d1d041-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.371293 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nvr8w" event={"ID":"5c9b4a58-e169-48a3-bd79-e71cb5d1d041","Type":"ContainerDied","Data":"4a57ca22208fb9f421fd9e71d4e1699f7810c751ed46c64fdb5ace5a92b63b02"} Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.371335 4728 generic.go:334] "Generic (PLEG): container finished" podID="5c9b4a58-e169-48a3-bd79-e71cb5d1d041" containerID="4a57ca22208fb9f421fd9e71d4e1699f7810c751ed46c64fdb5ace5a92b63b02" exitCode=0 Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.371373 4728 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nvr8w" Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.371405 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nvr8w" event={"ID":"5c9b4a58-e169-48a3-bd79-e71cb5d1d041","Type":"ContainerDied","Data":"c37de7a0590883e41ac2a101ba8912b062834a7d816f71ebc551da7f25b51e41"} Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.371380 4728 scope.go:117] "RemoveContainer" containerID="4a57ca22208fb9f421fd9e71d4e1699f7810c751ed46c64fdb5ace5a92b63b02" Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.424805 4728 scope.go:117] "RemoveContainer" containerID="5b92ef5e8ec4d0fbab9a512f01c7a326359db1f36801b367d963e885a2681a58" Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.429381 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nvr8w"] Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.440305 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nvr8w"] Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.449183 4728 scope.go:117] "RemoveContainer" containerID="23e1f069c3821896cfec8dfd59056146753337ed536033cd1be3422c73eb19a4" Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.511357 4728 scope.go:117] "RemoveContainer" containerID="4a57ca22208fb9f421fd9e71d4e1699f7810c751ed46c64fdb5ace5a92b63b02" Feb 27 11:39:51 crc kubenswrapper[4728]: E0227 11:39:51.511879 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a57ca22208fb9f421fd9e71d4e1699f7810c751ed46c64fdb5ace5a92b63b02\": container with ID starting with 4a57ca22208fb9f421fd9e71d4e1699f7810c751ed46c64fdb5ace5a92b63b02 not found: ID does not exist" containerID="4a57ca22208fb9f421fd9e71d4e1699f7810c751ed46c64fdb5ace5a92b63b02" Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.511977 4728 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a57ca22208fb9f421fd9e71d4e1699f7810c751ed46c64fdb5ace5a92b63b02"} err="failed to get container status \"4a57ca22208fb9f421fd9e71d4e1699f7810c751ed46c64fdb5ace5a92b63b02\": rpc error: code = NotFound desc = could not find container \"4a57ca22208fb9f421fd9e71d4e1699f7810c751ed46c64fdb5ace5a92b63b02\": container with ID starting with 4a57ca22208fb9f421fd9e71d4e1699f7810c751ed46c64fdb5ace5a92b63b02 not found: ID does not exist" Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.512055 4728 scope.go:117] "RemoveContainer" containerID="5b92ef5e8ec4d0fbab9a512f01c7a326359db1f36801b367d963e885a2681a58" Feb 27 11:39:51 crc kubenswrapper[4728]: E0227 11:39:51.512497 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b92ef5e8ec4d0fbab9a512f01c7a326359db1f36801b367d963e885a2681a58\": container with ID starting with 5b92ef5e8ec4d0fbab9a512f01c7a326359db1f36801b367d963e885a2681a58 not found: ID does not exist" containerID="5b92ef5e8ec4d0fbab9a512f01c7a326359db1f36801b367d963e885a2681a58" Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.512548 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b92ef5e8ec4d0fbab9a512f01c7a326359db1f36801b367d963e885a2681a58"} err="failed to get container status \"5b92ef5e8ec4d0fbab9a512f01c7a326359db1f36801b367d963e885a2681a58\": rpc error: code = NotFound desc = could not find container \"5b92ef5e8ec4d0fbab9a512f01c7a326359db1f36801b367d963e885a2681a58\": container with ID starting with 5b92ef5e8ec4d0fbab9a512f01c7a326359db1f36801b367d963e885a2681a58 not found: ID does not exist" Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.512574 4728 scope.go:117] "RemoveContainer" containerID="23e1f069c3821896cfec8dfd59056146753337ed536033cd1be3422c73eb19a4" Feb 27 11:39:51 crc kubenswrapper[4728]: E0227 
11:39:51.512886 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23e1f069c3821896cfec8dfd59056146753337ed536033cd1be3422c73eb19a4\": container with ID starting with 23e1f069c3821896cfec8dfd59056146753337ed536033cd1be3422c73eb19a4 not found: ID does not exist" containerID="23e1f069c3821896cfec8dfd59056146753337ed536033cd1be3422c73eb19a4" Feb 27 11:39:51 crc kubenswrapper[4728]: I0227 11:39:51.512960 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23e1f069c3821896cfec8dfd59056146753337ed536033cd1be3422c73eb19a4"} err="failed to get container status \"23e1f069c3821896cfec8dfd59056146753337ed536033cd1be3422c73eb19a4\": rpc error: code = NotFound desc = could not find container \"23e1f069c3821896cfec8dfd59056146753337ed536033cd1be3422c73eb19a4\": container with ID starting with 23e1f069c3821896cfec8dfd59056146753337ed536033cd1be3422c73eb19a4 not found: ID does not exist" Feb 27 11:39:52 crc kubenswrapper[4728]: I0227 11:39:52.741992 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c9b4a58-e169-48a3-bd79-e71cb5d1d041" path="/var/lib/kubelet/pods/5c9b4a58-e169-48a3-bd79-e71cb5d1d041/volumes" Feb 27 11:40:00 crc kubenswrapper[4728]: I0227 11:40:00.143136 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536540-2rzrm"] Feb 27 11:40:00 crc kubenswrapper[4728]: E0227 11:40:00.144146 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9b4a58-e169-48a3-bd79-e71cb5d1d041" containerName="registry-server" Feb 27 11:40:00 crc kubenswrapper[4728]: I0227 11:40:00.144159 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9b4a58-e169-48a3-bd79-e71cb5d1d041" containerName="registry-server" Feb 27 11:40:00 crc kubenswrapper[4728]: E0227 11:40:00.144181 4728 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5c9b4a58-e169-48a3-bd79-e71cb5d1d041" containerName="extract-content" Feb 27 11:40:00 crc kubenswrapper[4728]: I0227 11:40:00.144187 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9b4a58-e169-48a3-bd79-e71cb5d1d041" containerName="extract-content" Feb 27 11:40:00 crc kubenswrapper[4728]: E0227 11:40:00.144215 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9b4a58-e169-48a3-bd79-e71cb5d1d041" containerName="extract-utilities" Feb 27 11:40:00 crc kubenswrapper[4728]: I0227 11:40:00.144221 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9b4a58-e169-48a3-bd79-e71cb5d1d041" containerName="extract-utilities" Feb 27 11:40:00 crc kubenswrapper[4728]: I0227 11:40:00.144423 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9b4a58-e169-48a3-bd79-e71cb5d1d041" containerName="registry-server" Feb 27 11:40:00 crc kubenswrapper[4728]: I0227 11:40:00.145385 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536540-2rzrm" Feb 27 11:40:00 crc kubenswrapper[4728]: I0227 11:40:00.147595 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:40:00 crc kubenswrapper[4728]: I0227 11:40:00.148190 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:40:00 crc kubenswrapper[4728]: I0227 11:40:00.148188 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:40:00 crc kubenswrapper[4728]: I0227 11:40:00.152100 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536540-2rzrm"] Feb 27 11:40:00 crc kubenswrapper[4728]: I0227 11:40:00.208991 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhcfv\" (UniqueName: 
\"kubernetes.io/projected/d050f77f-c609-4ddf-aaea-374a492efb32-kube-api-access-xhcfv\") pod \"auto-csr-approver-29536540-2rzrm\" (UID: \"d050f77f-c609-4ddf-aaea-374a492efb32\") " pod="openshift-infra/auto-csr-approver-29536540-2rzrm" Feb 27 11:40:00 crc kubenswrapper[4728]: I0227 11:40:00.311063 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhcfv\" (UniqueName: \"kubernetes.io/projected/d050f77f-c609-4ddf-aaea-374a492efb32-kube-api-access-xhcfv\") pod \"auto-csr-approver-29536540-2rzrm\" (UID: \"d050f77f-c609-4ddf-aaea-374a492efb32\") " pod="openshift-infra/auto-csr-approver-29536540-2rzrm" Feb 27 11:40:00 crc kubenswrapper[4728]: I0227 11:40:00.337705 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhcfv\" (UniqueName: \"kubernetes.io/projected/d050f77f-c609-4ddf-aaea-374a492efb32-kube-api-access-xhcfv\") pod \"auto-csr-approver-29536540-2rzrm\" (UID: \"d050f77f-c609-4ddf-aaea-374a492efb32\") " pod="openshift-infra/auto-csr-approver-29536540-2rzrm" Feb 27 11:40:00 crc kubenswrapper[4728]: I0227 11:40:00.469045 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536540-2rzrm" Feb 27 11:40:00 crc kubenswrapper[4728]: I0227 11:40:00.987674 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536540-2rzrm"] Feb 27 11:40:01 crc kubenswrapper[4728]: W0227 11:40:01.005659 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd050f77f_c609_4ddf_aaea_374a492efb32.slice/crio-4560c91842a3d533a7f523cd9f7e0abe827110e36af6bec86184e988eb70210e WatchSource:0}: Error finding container 4560c91842a3d533a7f523cd9f7e0abe827110e36af6bec86184e988eb70210e: Status 404 returned error can't find the container with id 4560c91842a3d533a7f523cd9f7e0abe827110e36af6bec86184e988eb70210e Feb 27 11:40:01 crc kubenswrapper[4728]: I0227 11:40:01.517000 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536540-2rzrm" event={"ID":"d050f77f-c609-4ddf-aaea-374a492efb32","Type":"ContainerStarted","Data":"4560c91842a3d533a7f523cd9f7e0abe827110e36af6bec86184e988eb70210e"} Feb 27 11:40:03 crc kubenswrapper[4728]: I0227 11:40:03.541145 4728 generic.go:334] "Generic (PLEG): container finished" podID="d050f77f-c609-4ddf-aaea-374a492efb32" containerID="eec1271a5c15ed7b585dfa66a8b565a01aeb25dddab92fe6068d3109f91f0f69" exitCode=0 Feb 27 11:40:03 crc kubenswrapper[4728]: I0227 11:40:03.541191 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536540-2rzrm" event={"ID":"d050f77f-c609-4ddf-aaea-374a492efb32","Type":"ContainerDied","Data":"eec1271a5c15ed7b585dfa66a8b565a01aeb25dddab92fe6068d3109f91f0f69"} Feb 27 11:40:05 crc kubenswrapper[4728]: I0227 11:40:05.032256 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536540-2rzrm" Feb 27 11:40:05 crc kubenswrapper[4728]: I0227 11:40:05.129236 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhcfv\" (UniqueName: \"kubernetes.io/projected/d050f77f-c609-4ddf-aaea-374a492efb32-kube-api-access-xhcfv\") pod \"d050f77f-c609-4ddf-aaea-374a492efb32\" (UID: \"d050f77f-c609-4ddf-aaea-374a492efb32\") " Feb 27 11:40:05 crc kubenswrapper[4728]: I0227 11:40:05.140702 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d050f77f-c609-4ddf-aaea-374a492efb32-kube-api-access-xhcfv" (OuterVolumeSpecName: "kube-api-access-xhcfv") pod "d050f77f-c609-4ddf-aaea-374a492efb32" (UID: "d050f77f-c609-4ddf-aaea-374a492efb32"). InnerVolumeSpecName "kube-api-access-xhcfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:40:05 crc kubenswrapper[4728]: I0227 11:40:05.231245 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhcfv\" (UniqueName: \"kubernetes.io/projected/d050f77f-c609-4ddf-aaea-374a492efb32-kube-api-access-xhcfv\") on node \"crc\" DevicePath \"\"" Feb 27 11:40:05 crc kubenswrapper[4728]: I0227 11:40:05.585905 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536540-2rzrm" event={"ID":"d050f77f-c609-4ddf-aaea-374a492efb32","Type":"ContainerDied","Data":"4560c91842a3d533a7f523cd9f7e0abe827110e36af6bec86184e988eb70210e"} Feb 27 11:40:05 crc kubenswrapper[4728]: I0227 11:40:05.586175 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4560c91842a3d533a7f523cd9f7e0abe827110e36af6bec86184e988eb70210e" Feb 27 11:40:05 crc kubenswrapper[4728]: I0227 11:40:05.586009 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536540-2rzrm" Feb 27 11:40:05 crc kubenswrapper[4728]: I0227 11:40:05.922131 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:40:05 crc kubenswrapper[4728]: I0227 11:40:05.922211 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:40:06 crc kubenswrapper[4728]: I0227 11:40:06.129492 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536534-2kxjm"] Feb 27 11:40:06 crc kubenswrapper[4728]: I0227 11:40:06.144543 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536534-2kxjm"] Feb 27 11:40:06 crc kubenswrapper[4728]: I0227 11:40:06.744997 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04fe871b-15b6-4e3c-bf8d-d1744de17bd3" path="/var/lib/kubelet/pods/04fe871b-15b6-4e3c-bf8d-d1744de17bd3/volumes" Feb 27 11:40:27 crc kubenswrapper[4728]: I0227 11:40:27.716765 4728 scope.go:117] "RemoveContainer" containerID="256ac461d99f7360b3cf643b5421b5eb9c8fe262f251267f58a18a0cb00491d4" Feb 27 11:40:27 crc kubenswrapper[4728]: I0227 11:40:27.752450 4728 scope.go:117] "RemoveContainer" containerID="3327a804a7ccce4e2c05ae0298afafdc977084ee93873fa6b2f5c9e688a4cdd7" Feb 27 11:40:27 crc kubenswrapper[4728]: I0227 11:40:27.934752 4728 scope.go:117] "RemoveContainer" containerID="30ca011dc988406ee115149b53bfe228942402601a5290b1f0f8f2e476b61e79" Feb 27 11:40:27 crc 
kubenswrapper[4728]: I0227 11:40:27.964276 4728 scope.go:117] "RemoveContainer" containerID="612c7e067e1ae059c674ddb63ebaaf2a4de48689be7f2baa41219337673ca34e" Feb 27 11:40:35 crc kubenswrapper[4728]: I0227 11:40:35.922970 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:40:35 crc kubenswrapper[4728]: I0227 11:40:35.924422 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:40:35 crc kubenswrapper[4728]: I0227 11:40:35.924480 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 11:40:35 crc kubenswrapper[4728]: I0227 11:40:35.925580 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ab30cd3a8640f0fe410f4fe277433311a2edc0921ebb4808792beee5c3183fbd"} pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 11:40:35 crc kubenswrapper[4728]: I0227 11:40:35.925748 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" containerID="cri-o://ab30cd3a8640f0fe410f4fe277433311a2edc0921ebb4808792beee5c3183fbd" gracePeriod=600 Feb 27 11:40:37 crc kubenswrapper[4728]: I0227 
11:40:37.027468 4728 generic.go:334] "Generic (PLEG): container finished" podID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerID="ab30cd3a8640f0fe410f4fe277433311a2edc0921ebb4808792beee5c3183fbd" exitCode=0 Feb 27 11:40:37 crc kubenswrapper[4728]: I0227 11:40:37.028229 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerDied","Data":"ab30cd3a8640f0fe410f4fe277433311a2edc0921ebb4808792beee5c3183fbd"} Feb 27 11:40:37 crc kubenswrapper[4728]: I0227 11:40:37.028273 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7"} Feb 27 11:40:37 crc kubenswrapper[4728]: I0227 11:40:37.028304 4728 scope.go:117] "RemoveContainer" containerID="d3c42e3baa5b4af9ba58d5b44dbb6d220b09d79c410d643dc0fba85cce7058ed" Feb 27 11:42:00 crc kubenswrapper[4728]: I0227 11:42:00.160819 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536542-967v2"] Feb 27 11:42:00 crc kubenswrapper[4728]: E0227 11:42:00.162037 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d050f77f-c609-4ddf-aaea-374a492efb32" containerName="oc" Feb 27 11:42:00 crc kubenswrapper[4728]: I0227 11:42:00.162059 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="d050f77f-c609-4ddf-aaea-374a492efb32" containerName="oc" Feb 27 11:42:00 crc kubenswrapper[4728]: I0227 11:42:00.162425 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="d050f77f-c609-4ddf-aaea-374a492efb32" containerName="oc" Feb 27 11:42:00 crc kubenswrapper[4728]: I0227 11:42:00.163801 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536542-967v2" Feb 27 11:42:00 crc kubenswrapper[4728]: I0227 11:42:00.166439 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:42:00 crc kubenswrapper[4728]: I0227 11:42:00.167293 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:42:00 crc kubenswrapper[4728]: I0227 11:42:00.167918 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:42:00 crc kubenswrapper[4728]: I0227 11:42:00.176266 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536542-967v2"] Feb 27 11:42:00 crc kubenswrapper[4728]: I0227 11:42:00.240253 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc86t\" (UniqueName: \"kubernetes.io/projected/c02500ab-fff8-4c91-8db9-422ba4c5254b-kube-api-access-sc86t\") pod \"auto-csr-approver-29536542-967v2\" (UID: \"c02500ab-fff8-4c91-8db9-422ba4c5254b\") " pod="openshift-infra/auto-csr-approver-29536542-967v2" Feb 27 11:42:00 crc kubenswrapper[4728]: I0227 11:42:00.342799 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc86t\" (UniqueName: \"kubernetes.io/projected/c02500ab-fff8-4c91-8db9-422ba4c5254b-kube-api-access-sc86t\") pod \"auto-csr-approver-29536542-967v2\" (UID: \"c02500ab-fff8-4c91-8db9-422ba4c5254b\") " pod="openshift-infra/auto-csr-approver-29536542-967v2" Feb 27 11:42:00 crc kubenswrapper[4728]: I0227 11:42:00.363746 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc86t\" (UniqueName: \"kubernetes.io/projected/c02500ab-fff8-4c91-8db9-422ba4c5254b-kube-api-access-sc86t\") pod \"auto-csr-approver-29536542-967v2\" (UID: \"c02500ab-fff8-4c91-8db9-422ba4c5254b\") " 
pod="openshift-infra/auto-csr-approver-29536542-967v2" Feb 27 11:42:00 crc kubenswrapper[4728]: I0227 11:42:00.499691 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536542-967v2" Feb 27 11:42:01 crc kubenswrapper[4728]: I0227 11:42:01.043239 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536542-967v2"] Feb 27 11:42:01 crc kubenswrapper[4728]: I0227 11:42:01.170527 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536542-967v2" event={"ID":"c02500ab-fff8-4c91-8db9-422ba4c5254b","Type":"ContainerStarted","Data":"e190d6de48c37226ffe0dec93293e7ea11917712d9ddc166abe1891c28c73ebf"} Feb 27 11:42:03 crc kubenswrapper[4728]: I0227 11:42:03.200951 4728 generic.go:334] "Generic (PLEG): container finished" podID="c02500ab-fff8-4c91-8db9-422ba4c5254b" containerID="9996946dd66df0c0e950d5de9dc9784ea0ec3e9a851d49625d6b723c47469be8" exitCode=0 Feb 27 11:42:03 crc kubenswrapper[4728]: I0227 11:42:03.201022 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536542-967v2" event={"ID":"c02500ab-fff8-4c91-8db9-422ba4c5254b","Type":"ContainerDied","Data":"9996946dd66df0c0e950d5de9dc9784ea0ec3e9a851d49625d6b723c47469be8"} Feb 27 11:42:04 crc kubenswrapper[4728]: I0227 11:42:04.727115 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536542-967v2" Feb 27 11:42:04 crc kubenswrapper[4728]: I0227 11:42:04.857823 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc86t\" (UniqueName: \"kubernetes.io/projected/c02500ab-fff8-4c91-8db9-422ba4c5254b-kube-api-access-sc86t\") pod \"c02500ab-fff8-4c91-8db9-422ba4c5254b\" (UID: \"c02500ab-fff8-4c91-8db9-422ba4c5254b\") " Feb 27 11:42:04 crc kubenswrapper[4728]: I0227 11:42:04.870687 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c02500ab-fff8-4c91-8db9-422ba4c5254b-kube-api-access-sc86t" (OuterVolumeSpecName: "kube-api-access-sc86t") pod "c02500ab-fff8-4c91-8db9-422ba4c5254b" (UID: "c02500ab-fff8-4c91-8db9-422ba4c5254b"). InnerVolumeSpecName "kube-api-access-sc86t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:42:04 crc kubenswrapper[4728]: I0227 11:42:04.961172 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc86t\" (UniqueName: \"kubernetes.io/projected/c02500ab-fff8-4c91-8db9-422ba4c5254b-kube-api-access-sc86t\") on node \"crc\" DevicePath \"\"" Feb 27 11:42:05 crc kubenswrapper[4728]: I0227 11:42:05.223623 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536542-967v2" event={"ID":"c02500ab-fff8-4c91-8db9-422ba4c5254b","Type":"ContainerDied","Data":"e190d6de48c37226ffe0dec93293e7ea11917712d9ddc166abe1891c28c73ebf"} Feb 27 11:42:05 crc kubenswrapper[4728]: I0227 11:42:05.223942 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e190d6de48c37226ffe0dec93293e7ea11917712d9ddc166abe1891c28c73ebf" Feb 27 11:42:05 crc kubenswrapper[4728]: I0227 11:42:05.223784 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536542-967v2" Feb 27 11:42:05 crc kubenswrapper[4728]: I0227 11:42:05.807144 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536536-xk7nh"] Feb 27 11:42:05 crc kubenswrapper[4728]: I0227 11:42:05.817906 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536536-xk7nh"] Feb 27 11:42:06 crc kubenswrapper[4728]: I0227 11:42:06.738959 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8af8c3c-e1b2-4588-aa43-0e33511b7ae9" path="/var/lib/kubelet/pods/c8af8c3c-e1b2-4588-aa43-0e33511b7ae9/volumes" Feb 27 11:42:28 crc kubenswrapper[4728]: I0227 11:42:28.115537 4728 scope.go:117] "RemoveContainer" containerID="af1322b85ed8058615ec49a774a0674de9dda5fe024b2489b86fc45ed6223349" Feb 27 11:43:05 crc kubenswrapper[4728]: I0227 11:43:05.922784 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:43:05 crc kubenswrapper[4728]: I0227 11:43:05.923444 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:43:35 crc kubenswrapper[4728]: I0227 11:43:35.922554 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:43:35 crc kubenswrapper[4728]: 
I0227 11:43:35.923063 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:43:52 crc kubenswrapper[4728]: I0227 11:43:52.913935 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c9q8x"] Feb 27 11:43:52 crc kubenswrapper[4728]: E0227 11:43:52.915478 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02500ab-fff8-4c91-8db9-422ba4c5254b" containerName="oc" Feb 27 11:43:52 crc kubenswrapper[4728]: I0227 11:43:52.915536 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02500ab-fff8-4c91-8db9-422ba4c5254b" containerName="oc" Feb 27 11:43:52 crc kubenswrapper[4728]: I0227 11:43:52.916066 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c02500ab-fff8-4c91-8db9-422ba4c5254b" containerName="oc" Feb 27 11:43:52 crc kubenswrapper[4728]: I0227 11:43:52.922475 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c9q8x" Feb 27 11:43:52 crc kubenswrapper[4728]: I0227 11:43:52.934107 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c9q8x"] Feb 27 11:43:53 crc kubenswrapper[4728]: I0227 11:43:53.017183 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq28h\" (UniqueName: \"kubernetes.io/projected/24192cb3-ca1f-4a5a-9b09-5a1f481c3665-kube-api-access-tq28h\") pod \"redhat-operators-c9q8x\" (UID: \"24192cb3-ca1f-4a5a-9b09-5a1f481c3665\") " pod="openshift-marketplace/redhat-operators-c9q8x" Feb 27 11:43:53 crc kubenswrapper[4728]: I0227 11:43:53.017330 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24192cb3-ca1f-4a5a-9b09-5a1f481c3665-catalog-content\") pod \"redhat-operators-c9q8x\" (UID: \"24192cb3-ca1f-4a5a-9b09-5a1f481c3665\") " pod="openshift-marketplace/redhat-operators-c9q8x" Feb 27 11:43:53 crc kubenswrapper[4728]: I0227 11:43:53.017446 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24192cb3-ca1f-4a5a-9b09-5a1f481c3665-utilities\") pod \"redhat-operators-c9q8x\" (UID: \"24192cb3-ca1f-4a5a-9b09-5a1f481c3665\") " pod="openshift-marketplace/redhat-operators-c9q8x" Feb 27 11:43:53 crc kubenswrapper[4728]: I0227 11:43:53.119292 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq28h\" (UniqueName: \"kubernetes.io/projected/24192cb3-ca1f-4a5a-9b09-5a1f481c3665-kube-api-access-tq28h\") pod \"redhat-operators-c9q8x\" (UID: \"24192cb3-ca1f-4a5a-9b09-5a1f481c3665\") " pod="openshift-marketplace/redhat-operators-c9q8x" Feb 27 11:43:53 crc kubenswrapper[4728]: I0227 11:43:53.119732 4728 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24192cb3-ca1f-4a5a-9b09-5a1f481c3665-catalog-content\") pod \"redhat-operators-c9q8x\" (UID: \"24192cb3-ca1f-4a5a-9b09-5a1f481c3665\") " pod="openshift-marketplace/redhat-operators-c9q8x" Feb 27 11:43:53 crc kubenswrapper[4728]: I0227 11:43:53.119790 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24192cb3-ca1f-4a5a-9b09-5a1f481c3665-utilities\") pod \"redhat-operators-c9q8x\" (UID: \"24192cb3-ca1f-4a5a-9b09-5a1f481c3665\") " pod="openshift-marketplace/redhat-operators-c9q8x" Feb 27 11:43:53 crc kubenswrapper[4728]: I0227 11:43:53.120398 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24192cb3-ca1f-4a5a-9b09-5a1f481c3665-utilities\") pod \"redhat-operators-c9q8x\" (UID: \"24192cb3-ca1f-4a5a-9b09-5a1f481c3665\") " pod="openshift-marketplace/redhat-operators-c9q8x" Feb 27 11:43:53 crc kubenswrapper[4728]: I0227 11:43:53.120518 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24192cb3-ca1f-4a5a-9b09-5a1f481c3665-catalog-content\") pod \"redhat-operators-c9q8x\" (UID: \"24192cb3-ca1f-4a5a-9b09-5a1f481c3665\") " pod="openshift-marketplace/redhat-operators-c9q8x" Feb 27 11:43:53 crc kubenswrapper[4728]: I0227 11:43:53.147189 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq28h\" (UniqueName: \"kubernetes.io/projected/24192cb3-ca1f-4a5a-9b09-5a1f481c3665-kube-api-access-tq28h\") pod \"redhat-operators-c9q8x\" (UID: \"24192cb3-ca1f-4a5a-9b09-5a1f481c3665\") " pod="openshift-marketplace/redhat-operators-c9q8x" Feb 27 11:43:53 crc kubenswrapper[4728]: I0227 11:43:53.250754 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c9q8x" Feb 27 11:43:53 crc kubenswrapper[4728]: I0227 11:43:53.791129 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c9q8x"] Feb 27 11:43:54 crc kubenswrapper[4728]: I0227 11:43:54.672788 4728 generic.go:334] "Generic (PLEG): container finished" podID="24192cb3-ca1f-4a5a-9b09-5a1f481c3665" containerID="2974500b88829f7661918d6610e8f54543b42d7bbbcacc3a5ddcae98d27fbdd6" exitCode=0 Feb 27 11:43:54 crc kubenswrapper[4728]: I0227 11:43:54.673200 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9q8x" event={"ID":"24192cb3-ca1f-4a5a-9b09-5a1f481c3665","Type":"ContainerDied","Data":"2974500b88829f7661918d6610e8f54543b42d7bbbcacc3a5ddcae98d27fbdd6"} Feb 27 11:43:54 crc kubenswrapper[4728]: I0227 11:43:54.673224 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9q8x" event={"ID":"24192cb3-ca1f-4a5a-9b09-5a1f481c3665","Type":"ContainerStarted","Data":"1aef2468a8aae2476e66070ef0426b8bd5165991d40351491b21f7a9a22f4411"} Feb 27 11:43:54 crc kubenswrapper[4728]: I0227 11:43:54.675981 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 11:43:56 crc kubenswrapper[4728]: I0227 11:43:56.702946 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9q8x" event={"ID":"24192cb3-ca1f-4a5a-9b09-5a1f481c3665","Type":"ContainerStarted","Data":"e815f58af06c77b9ad2531a2e82cfdb2e092becc62f4ee6ddbc3a1df3e913ca6"} Feb 27 11:44:00 crc kubenswrapper[4728]: I0227 11:44:00.162888 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536544-7zmq4"] Feb 27 11:44:00 crc kubenswrapper[4728]: I0227 11:44:00.164910 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536544-7zmq4" Feb 27 11:44:00 crc kubenswrapper[4728]: I0227 11:44:00.167088 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:44:00 crc kubenswrapper[4728]: I0227 11:44:00.167129 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:44:00 crc kubenswrapper[4728]: I0227 11:44:00.168250 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:44:00 crc kubenswrapper[4728]: I0227 11:44:00.174458 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536544-7zmq4"] Feb 27 11:44:00 crc kubenswrapper[4728]: I0227 11:44:00.211325 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt7bp\" (UniqueName: \"kubernetes.io/projected/6ed52116-68d5-45f7-ab26-ae8973646d4e-kube-api-access-dt7bp\") pod \"auto-csr-approver-29536544-7zmq4\" (UID: \"6ed52116-68d5-45f7-ab26-ae8973646d4e\") " pod="openshift-infra/auto-csr-approver-29536544-7zmq4" Feb 27 11:44:00 crc kubenswrapper[4728]: I0227 11:44:00.314103 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt7bp\" (UniqueName: \"kubernetes.io/projected/6ed52116-68d5-45f7-ab26-ae8973646d4e-kube-api-access-dt7bp\") pod \"auto-csr-approver-29536544-7zmq4\" (UID: \"6ed52116-68d5-45f7-ab26-ae8973646d4e\") " pod="openshift-infra/auto-csr-approver-29536544-7zmq4" Feb 27 11:44:00 crc kubenswrapper[4728]: I0227 11:44:00.373925 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt7bp\" (UniqueName: \"kubernetes.io/projected/6ed52116-68d5-45f7-ab26-ae8973646d4e-kube-api-access-dt7bp\") pod \"auto-csr-approver-29536544-7zmq4\" (UID: \"6ed52116-68d5-45f7-ab26-ae8973646d4e\") " 
pod="openshift-infra/auto-csr-approver-29536544-7zmq4" Feb 27 11:44:00 crc kubenswrapper[4728]: I0227 11:44:00.491702 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536544-7zmq4" Feb 27 11:44:01 crc kubenswrapper[4728]: I0227 11:44:01.136539 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536544-7zmq4"] Feb 27 11:44:01 crc kubenswrapper[4728]: W0227 11:44:01.137372 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ed52116_68d5_45f7_ab26_ae8973646d4e.slice/crio-611559781dcbdc3e41a1e88426bf5cde1525f4a05cd975afbe33c47e6a4f3161 WatchSource:0}: Error finding container 611559781dcbdc3e41a1e88426bf5cde1525f4a05cd975afbe33c47e6a4f3161: Status 404 returned error can't find the container with id 611559781dcbdc3e41a1e88426bf5cde1525f4a05cd975afbe33c47e6a4f3161 Feb 27 11:44:01 crc kubenswrapper[4728]: I0227 11:44:01.762613 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536544-7zmq4" event={"ID":"6ed52116-68d5-45f7-ab26-ae8973646d4e","Type":"ContainerStarted","Data":"611559781dcbdc3e41a1e88426bf5cde1525f4a05cd975afbe33c47e6a4f3161"} Feb 27 11:44:01 crc kubenswrapper[4728]: I0227 11:44:01.764810 4728 generic.go:334] "Generic (PLEG): container finished" podID="24192cb3-ca1f-4a5a-9b09-5a1f481c3665" containerID="e815f58af06c77b9ad2531a2e82cfdb2e092becc62f4ee6ddbc3a1df3e913ca6" exitCode=0 Feb 27 11:44:01 crc kubenswrapper[4728]: I0227 11:44:01.764858 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9q8x" event={"ID":"24192cb3-ca1f-4a5a-9b09-5a1f481c3665","Type":"ContainerDied","Data":"e815f58af06c77b9ad2531a2e82cfdb2e092becc62f4ee6ddbc3a1df3e913ca6"} Feb 27 11:44:02 crc kubenswrapper[4728]: I0227 11:44:02.793080 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29536544-7zmq4" event={"ID":"6ed52116-68d5-45f7-ab26-ae8973646d4e","Type":"ContainerStarted","Data":"d3ec9edad13897d84c73e534d72f74cf09011c43c7bbbf4a1c057060b8f938c2"} Feb 27 11:44:02 crc kubenswrapper[4728]: I0227 11:44:02.803759 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9q8x" event={"ID":"24192cb3-ca1f-4a5a-9b09-5a1f481c3665","Type":"ContainerStarted","Data":"eeb74ce058034bd68672ba8420c1860592c10cf66ec3e3b923935787c7d74ab0"} Feb 27 11:44:02 crc kubenswrapper[4728]: I0227 11:44:02.840782 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536544-7zmq4" podStartSLOduration=1.947651998 podStartE2EDuration="2.840759808s" podCreationTimestamp="2026-02-27 11:44:00 +0000 UTC" firstStartedPulling="2026-02-27 11:44:01.141077313 +0000 UTC m=+4661.103443429" lastFinishedPulling="2026-02-27 11:44:02.034185133 +0000 UTC m=+4661.996551239" observedRunningTime="2026-02-27 11:44:02.820916573 +0000 UTC m=+4662.783282709" watchObservedRunningTime="2026-02-27 11:44:02.840759808 +0000 UTC m=+4662.803125914" Feb 27 11:44:02 crc kubenswrapper[4728]: I0227 11:44:02.848850 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c9q8x" podStartSLOduration=3.306900589 podStartE2EDuration="10.848824888s" podCreationTimestamp="2026-02-27 11:43:52 +0000 UTC" firstStartedPulling="2026-02-27 11:43:54.67579094 +0000 UTC m=+4654.638157046" lastFinishedPulling="2026-02-27 11:44:02.217715239 +0000 UTC m=+4662.180081345" observedRunningTime="2026-02-27 11:44:02.838459894 +0000 UTC m=+4662.800826010" watchObservedRunningTime="2026-02-27 11:44:02.848824888 +0000 UTC m=+4662.811191004" Feb 27 11:44:03 crc kubenswrapper[4728]: I0227 11:44:03.251075 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c9q8x" Feb 27 11:44:03 crc 
kubenswrapper[4728]: I0227 11:44:03.251761 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c9q8x" Feb 27 11:44:03 crc kubenswrapper[4728]: I0227 11:44:03.818917 4728 generic.go:334] "Generic (PLEG): container finished" podID="6ed52116-68d5-45f7-ab26-ae8973646d4e" containerID="d3ec9edad13897d84c73e534d72f74cf09011c43c7bbbf4a1c057060b8f938c2" exitCode=0 Feb 27 11:44:03 crc kubenswrapper[4728]: I0227 11:44:03.821540 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536544-7zmq4" event={"ID":"6ed52116-68d5-45f7-ab26-ae8973646d4e","Type":"ContainerDied","Data":"d3ec9edad13897d84c73e534d72f74cf09011c43c7bbbf4a1c057060b8f938c2"} Feb 27 11:44:04 crc kubenswrapper[4728]: I0227 11:44:04.524981 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c9q8x" podUID="24192cb3-ca1f-4a5a-9b09-5a1f481c3665" containerName="registry-server" probeResult="failure" output=< Feb 27 11:44:04 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:44:04 crc kubenswrapper[4728]: > Feb 27 11:44:05 crc kubenswrapper[4728]: I0227 11:44:05.549705 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536544-7zmq4" Feb 27 11:44:05 crc kubenswrapper[4728]: I0227 11:44:05.669885 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt7bp\" (UniqueName: \"kubernetes.io/projected/6ed52116-68d5-45f7-ab26-ae8973646d4e-kube-api-access-dt7bp\") pod \"6ed52116-68d5-45f7-ab26-ae8973646d4e\" (UID: \"6ed52116-68d5-45f7-ab26-ae8973646d4e\") " Feb 27 11:44:05 crc kubenswrapper[4728]: I0227 11:44:05.679821 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ed52116-68d5-45f7-ab26-ae8973646d4e-kube-api-access-dt7bp" (OuterVolumeSpecName: "kube-api-access-dt7bp") pod "6ed52116-68d5-45f7-ab26-ae8973646d4e" (UID: "6ed52116-68d5-45f7-ab26-ae8973646d4e"). InnerVolumeSpecName "kube-api-access-dt7bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:44:05 crc kubenswrapper[4728]: I0227 11:44:05.773190 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt7bp\" (UniqueName: \"kubernetes.io/projected/6ed52116-68d5-45f7-ab26-ae8973646d4e-kube-api-access-dt7bp\") on node \"crc\" DevicePath \"\"" Feb 27 11:44:05 crc kubenswrapper[4728]: I0227 11:44:05.843399 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536544-7zmq4" event={"ID":"6ed52116-68d5-45f7-ab26-ae8973646d4e","Type":"ContainerDied","Data":"611559781dcbdc3e41a1e88426bf5cde1525f4a05cd975afbe33c47e6a4f3161"} Feb 27 11:44:05 crc kubenswrapper[4728]: I0227 11:44:05.843437 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="611559781dcbdc3e41a1e88426bf5cde1525f4a05cd975afbe33c47e6a4f3161" Feb 27 11:44:05 crc kubenswrapper[4728]: I0227 11:44:05.843488 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536544-7zmq4" Feb 27 11:44:05 crc kubenswrapper[4728]: I0227 11:44:05.901983 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536538-5n245"] Feb 27 11:44:05 crc kubenswrapper[4728]: I0227 11:44:05.916433 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536538-5n245"] Feb 27 11:44:05 crc kubenswrapper[4728]: I0227 11:44:05.922469 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:44:05 crc kubenswrapper[4728]: I0227 11:44:05.922588 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:44:05 crc kubenswrapper[4728]: I0227 11:44:05.922642 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 11:44:05 crc kubenswrapper[4728]: I0227 11:44:05.923850 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7"} pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 11:44:05 crc kubenswrapper[4728]: I0227 11:44:05.923943 4728 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" containerID="cri-o://5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" gracePeriod=600 Feb 27 11:44:06 crc kubenswrapper[4728]: E0227 11:44:06.051333 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:44:06 crc kubenswrapper[4728]: I0227 11:44:06.740830 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74194f81-8740-4f00-af3b-999db7c2ff93" path="/var/lib/kubelet/pods/74194f81-8740-4f00-af3b-999db7c2ff93/volumes" Feb 27 11:44:06 crc kubenswrapper[4728]: I0227 11:44:06.883453 4728 generic.go:334] "Generic (PLEG): container finished" podID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" exitCode=0 Feb 27 11:44:06 crc kubenswrapper[4728]: I0227 11:44:06.883523 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerDied","Data":"5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7"} Feb 27 11:44:06 crc kubenswrapper[4728]: I0227 11:44:06.884115 4728 scope.go:117] "RemoveContainer" containerID="ab30cd3a8640f0fe410f4fe277433311a2edc0921ebb4808792beee5c3183fbd" Feb 27 11:44:06 crc kubenswrapper[4728]: I0227 11:44:06.885000 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:44:06 crc kubenswrapper[4728]: E0227 
11:44:06.885666 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:44:14 crc kubenswrapper[4728]: I0227 11:44:14.317397 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c9q8x" podUID="24192cb3-ca1f-4a5a-9b09-5a1f481c3665" containerName="registry-server" probeResult="failure" output=< Feb 27 11:44:14 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:44:14 crc kubenswrapper[4728]: > Feb 27 11:44:17 crc kubenswrapper[4728]: I0227 11:44:17.725801 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:44:17 crc kubenswrapper[4728]: E0227 11:44:17.726950 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.029096 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 27 11:44:24 crc kubenswrapper[4728]: E0227 11:44:24.031953 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ed52116-68d5-45f7-ab26-ae8973646d4e" containerName="oc" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.032071 4728 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6ed52116-68d5-45f7-ab26-ae8973646d4e" containerName="oc" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.032474 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ed52116-68d5-45f7-ab26-ae8973646d4e" containerName="oc" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.033623 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.037471 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-8qr9h" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.037928 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.038281 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.038595 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.050423 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.180324 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/87d887cf-2e16-418f-a903-a24551953f9d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.180407 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/87d887cf-2e16-418f-a903-a24551953f9d-ca-certs\") pod 
\"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.180602 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.180712 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87d887cf-2e16-418f-a903-a24551953f9d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.180905 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/87d887cf-2e16-418f-a903-a24551953f9d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.180965 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/87d887cf-2e16-418f-a903-a24551953f9d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.181066 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/87d887cf-2e16-418f-a903-a24551953f9d-test-operator-ephemeral-temporary\") pod 
\"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.185914 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87d887cf-2e16-418f-a903-a24551953f9d-config-data\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.185971 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxfcg\" (UniqueName: \"kubernetes.io/projected/87d887cf-2e16-418f-a903-a24551953f9d-kube-api-access-nxfcg\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.289690 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87d887cf-2e16-418f-a903-a24551953f9d-config-data\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.289860 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxfcg\" (UniqueName: \"kubernetes.io/projected/87d887cf-2e16-418f-a903-a24551953f9d-kube-api-access-nxfcg\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.290065 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/87d887cf-2e16-418f-a903-a24551953f9d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: 
\"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.290152 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/87d887cf-2e16-418f-a903-a24551953f9d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.290229 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.290295 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87d887cf-2e16-418f-a903-a24551953f9d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.290425 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/87d887cf-2e16-418f-a903-a24551953f9d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.290478 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/87d887cf-2e16-418f-a903-a24551953f9d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.290521 4728 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/87d887cf-2e16-418f-a903-a24551953f9d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.290621 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/87d887cf-2e16-418f-a903-a24551953f9d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.291083 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/87d887cf-2e16-418f-a903-a24551953f9d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.291432 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/87d887cf-2e16-418f-a903-a24551953f9d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.291615 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87d887cf-2e16-418f-a903-a24551953f9d-config-data\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.296706 4728 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.296983 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87d887cf-2e16-418f-a903-a24551953f9d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.297136 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/87d887cf-2e16-418f-a903-a24551953f9d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.304963 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/87d887cf-2e16-418f-a903-a24551953f9d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.310129 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxfcg\" (UniqueName: \"kubernetes.io/projected/87d887cf-2e16-418f-a903-a24551953f9d-kube-api-access-nxfcg\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.315521 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c9q8x" podUID="24192cb3-ca1f-4a5a-9b09-5a1f481c3665" 
containerName="registry-server" probeResult="failure" output=< Feb 27 11:44:24 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:44:24 crc kubenswrapper[4728]: > Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.332852 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.369405 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 27 11:44:24 crc kubenswrapper[4728]: I0227 11:44:24.958083 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 27 11:44:25 crc kubenswrapper[4728]: I0227 11:44:25.125613 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"87d887cf-2e16-418f-a903-a24551953f9d","Type":"ContainerStarted","Data":"c7581fa2bb148dd2e01c803b2d67edc5be75f52ff25b7d5d95e121afbe540ba1"} Feb 27 11:44:28 crc kubenswrapper[4728]: I0227 11:44:28.276774 4728 scope.go:117] "RemoveContainer" containerID="12648aab44f1d359122962e74b03a69b8bc454d7a6b37231c2e5b1a600d389a8" Feb 27 11:44:29 crc kubenswrapper[4728]: I0227 11:44:29.726549 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:44:29 crc kubenswrapper[4728]: E0227 11:44:29.727172 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" 
podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:44:34 crc kubenswrapper[4728]: I0227 11:44:34.306788 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c9q8x" podUID="24192cb3-ca1f-4a5a-9b09-5a1f481c3665" containerName="registry-server" probeResult="failure" output=< Feb 27 11:44:34 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:44:34 crc kubenswrapper[4728]: > Feb 27 11:44:43 crc kubenswrapper[4728]: I0227 11:44:43.724845 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:44:43 crc kubenswrapper[4728]: E0227 11:44:43.725692 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:44:44 crc kubenswrapper[4728]: I0227 11:44:44.051898 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c9q8x" Feb 27 11:44:44 crc kubenswrapper[4728]: I0227 11:44:44.120153 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c9q8x" Feb 27 11:44:44 crc kubenswrapper[4728]: I0227 11:44:44.289898 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c9q8x"] Feb 27 11:44:45 crc kubenswrapper[4728]: I0227 11:44:45.383314 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c9q8x" podUID="24192cb3-ca1f-4a5a-9b09-5a1f481c3665" containerName="registry-server" 
containerID="cri-o://eeb74ce058034bd68672ba8420c1860592c10cf66ec3e3b923935787c7d74ab0" gracePeriod=2 Feb 27 11:44:46 crc kubenswrapper[4728]: I0227 11:44:46.405909 4728 generic.go:334] "Generic (PLEG): container finished" podID="24192cb3-ca1f-4a5a-9b09-5a1f481c3665" containerID="eeb74ce058034bd68672ba8420c1860592c10cf66ec3e3b923935787c7d74ab0" exitCode=0 Feb 27 11:44:46 crc kubenswrapper[4728]: I0227 11:44:46.406256 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9q8x" event={"ID":"24192cb3-ca1f-4a5a-9b09-5a1f481c3665","Type":"ContainerDied","Data":"eeb74ce058034bd68672ba8420c1860592c10cf66ec3e3b923935787c7d74ab0"} Feb 27 11:44:53 crc kubenswrapper[4728]: E0227 11:44:53.257433 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eeb74ce058034bd68672ba8420c1860592c10cf66ec3e3b923935787c7d74ab0 is running failed: container process not found" containerID="eeb74ce058034bd68672ba8420c1860592c10cf66ec3e3b923935787c7d74ab0" cmd=["grpc_health_probe","-addr=:50051"] Feb 27 11:44:53 crc kubenswrapper[4728]: E0227 11:44:53.258564 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eeb74ce058034bd68672ba8420c1860592c10cf66ec3e3b923935787c7d74ab0 is running failed: container process not found" containerID="eeb74ce058034bd68672ba8420c1860592c10cf66ec3e3b923935787c7d74ab0" cmd=["grpc_health_probe","-addr=:50051"] Feb 27 11:44:53 crc kubenswrapper[4728]: E0227 11:44:53.258957 4728 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eeb74ce058034bd68672ba8420c1860592c10cf66ec3e3b923935787c7d74ab0 is running failed: container process not found" containerID="eeb74ce058034bd68672ba8420c1860592c10cf66ec3e3b923935787c7d74ab0" 
cmd=["grpc_health_probe","-addr=:50051"] Feb 27 11:44:53 crc kubenswrapper[4728]: E0227 11:44:53.259055 4728 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eeb74ce058034bd68672ba8420c1860592c10cf66ec3e3b923935787c7d74ab0 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-c9q8x" podUID="24192cb3-ca1f-4a5a-9b09-5a1f481c3665" containerName="registry-server" Feb 27 11:44:57 crc kubenswrapper[4728]: I0227 11:44:57.725111 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:44:57 crc kubenswrapper[4728]: E0227 11:44:57.725944 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:45:00 crc kubenswrapper[4728]: I0227 11:45:00.238723 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7"] Feb 27 11:45:00 crc kubenswrapper[4728]: I0227 11:45:00.249019 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7" Feb 27 11:45:00 crc kubenswrapper[4728]: I0227 11:45:00.251255 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 11:45:00 crc kubenswrapper[4728]: I0227 11:45:00.251286 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 11:45:00 crc kubenswrapper[4728]: I0227 11:45:00.264090 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7"] Feb 27 11:45:00 crc kubenswrapper[4728]: I0227 11:45:00.332847 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21aa8652-e254-421a-bda3-b25c9419b995-config-volume\") pod \"collect-profiles-29536545-4hmh7\" (UID: \"21aa8652-e254-421a-bda3-b25c9419b995\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7" Feb 27 11:45:00 crc kubenswrapper[4728]: I0227 11:45:00.333081 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctnrm\" (UniqueName: \"kubernetes.io/projected/21aa8652-e254-421a-bda3-b25c9419b995-kube-api-access-ctnrm\") pod \"collect-profiles-29536545-4hmh7\" (UID: \"21aa8652-e254-421a-bda3-b25c9419b995\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7" Feb 27 11:45:00 crc kubenswrapper[4728]: I0227 11:45:00.333161 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21aa8652-e254-421a-bda3-b25c9419b995-secret-volume\") pod \"collect-profiles-29536545-4hmh7\" (UID: \"21aa8652-e254-421a-bda3-b25c9419b995\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7" Feb 27 11:45:00 crc kubenswrapper[4728]: I0227 11:45:00.435416 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctnrm\" (UniqueName: \"kubernetes.io/projected/21aa8652-e254-421a-bda3-b25c9419b995-kube-api-access-ctnrm\") pod \"collect-profiles-29536545-4hmh7\" (UID: \"21aa8652-e254-421a-bda3-b25c9419b995\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7" Feb 27 11:45:00 crc kubenswrapper[4728]: I0227 11:45:00.435477 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21aa8652-e254-421a-bda3-b25c9419b995-secret-volume\") pod \"collect-profiles-29536545-4hmh7\" (UID: \"21aa8652-e254-421a-bda3-b25c9419b995\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7" Feb 27 11:45:00 crc kubenswrapper[4728]: I0227 11:45:00.435711 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21aa8652-e254-421a-bda3-b25c9419b995-config-volume\") pod \"collect-profiles-29536545-4hmh7\" (UID: \"21aa8652-e254-421a-bda3-b25c9419b995\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7" Feb 27 11:45:00 crc kubenswrapper[4728]: I0227 11:45:00.436697 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21aa8652-e254-421a-bda3-b25c9419b995-config-volume\") pod \"collect-profiles-29536545-4hmh7\" (UID: \"21aa8652-e254-421a-bda3-b25c9419b995\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7" Feb 27 11:45:00 crc kubenswrapper[4728]: I0227 11:45:00.450704 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/21aa8652-e254-421a-bda3-b25c9419b995-secret-volume\") pod \"collect-profiles-29536545-4hmh7\" (UID: \"21aa8652-e254-421a-bda3-b25c9419b995\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7" Feb 27 11:45:00 crc kubenswrapper[4728]: I0227 11:45:00.453338 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctnrm\" (UniqueName: \"kubernetes.io/projected/21aa8652-e254-421a-bda3-b25c9419b995-kube-api-access-ctnrm\") pod \"collect-profiles-29536545-4hmh7\" (UID: \"21aa8652-e254-421a-bda3-b25c9419b995\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7" Feb 27 11:45:00 crc kubenswrapper[4728]: I0227 11:45:00.610414 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7" Feb 27 11:45:02 crc kubenswrapper[4728]: E0227 11:45:02.540344 4728 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 27 11:45:02 crc kubenswrapper[4728]: E0227 11:45:02.548803 4728 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nxfcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(87d887cf-2e16-418f-a903-a24551953f9d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 11:45:02 crc kubenswrapper[4728]: E0227 11:45:02.550961 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="87d887cf-2e16-418f-a903-a24551953f9d" Feb 27 11:45:02 crc kubenswrapper[4728]: E0227 11:45:02.630060 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="87d887cf-2e16-418f-a903-a24551953f9d" Feb 27 11:45:03 crc 
kubenswrapper[4728]: I0227 11:45:03.043873 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c9q8x" Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.102256 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24192cb3-ca1f-4a5a-9b09-5a1f481c3665-catalog-content\") pod \"24192cb3-ca1f-4a5a-9b09-5a1f481c3665\" (UID: \"24192cb3-ca1f-4a5a-9b09-5a1f481c3665\") " Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.102475 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq28h\" (UniqueName: \"kubernetes.io/projected/24192cb3-ca1f-4a5a-9b09-5a1f481c3665-kube-api-access-tq28h\") pod \"24192cb3-ca1f-4a5a-9b09-5a1f481c3665\" (UID: \"24192cb3-ca1f-4a5a-9b09-5a1f481c3665\") " Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.102539 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24192cb3-ca1f-4a5a-9b09-5a1f481c3665-utilities\") pod \"24192cb3-ca1f-4a5a-9b09-5a1f481c3665\" (UID: \"24192cb3-ca1f-4a5a-9b09-5a1f481c3665\") " Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.103600 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24192cb3-ca1f-4a5a-9b09-5a1f481c3665-utilities" (OuterVolumeSpecName: "utilities") pod "24192cb3-ca1f-4a5a-9b09-5a1f481c3665" (UID: "24192cb3-ca1f-4a5a-9b09-5a1f481c3665"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.114581 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24192cb3-ca1f-4a5a-9b09-5a1f481c3665-kube-api-access-tq28h" (OuterVolumeSpecName: "kube-api-access-tq28h") pod "24192cb3-ca1f-4a5a-9b09-5a1f481c3665" (UID: "24192cb3-ca1f-4a5a-9b09-5a1f481c3665"). InnerVolumeSpecName "kube-api-access-tq28h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.161849 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7"] Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.204996 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq28h\" (UniqueName: \"kubernetes.io/projected/24192cb3-ca1f-4a5a-9b09-5a1f481c3665-kube-api-access-tq28h\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.205022 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24192cb3-ca1f-4a5a-9b09-5a1f481c3665-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.226733 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24192cb3-ca1f-4a5a-9b09-5a1f481c3665-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24192cb3-ca1f-4a5a-9b09-5a1f481c3665" (UID: "24192cb3-ca1f-4a5a-9b09-5a1f481c3665"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.306660 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24192cb3-ca1f-4a5a-9b09-5a1f481c3665-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.649211 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9q8x" event={"ID":"24192cb3-ca1f-4a5a-9b09-5a1f481c3665","Type":"ContainerDied","Data":"1aef2468a8aae2476e66070ef0426b8bd5165991d40351491b21f7a9a22f4411"} Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.649651 4728 scope.go:117] "RemoveContainer" containerID="eeb74ce058034bd68672ba8420c1860592c10cf66ec3e3b923935787c7d74ab0" Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.649271 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c9q8x" Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.654110 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7" event={"ID":"21aa8652-e254-421a-bda3-b25c9419b995","Type":"ContainerStarted","Data":"931c2743c42a8a97453974b7d4c711b288e0da3c52d3f100c4b9719548717963"} Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.654167 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7" event={"ID":"21aa8652-e254-421a-bda3-b25c9419b995","Type":"ContainerStarted","Data":"a6987a993e670ca07670175369c9cd05ea7a5b2e5fddd642508af7754c92d86e"} Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.684107 4728 scope.go:117] "RemoveContainer" containerID="e815f58af06c77b9ad2531a2e82cfdb2e092becc62f4ee6ddbc3a1df3e913ca6" Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.685039 4728 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7" podStartSLOduration=3.685024235 podStartE2EDuration="3.685024235s" podCreationTimestamp="2026-02-27 11:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:45:03.67862845 +0000 UTC m=+4723.640994566" watchObservedRunningTime="2026-02-27 11:45:03.685024235 +0000 UTC m=+4723.647390341" Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.714352 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c9q8x"] Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.726300 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c9q8x"] Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.730802 4728 scope.go:117] "RemoveContainer" containerID="2974500b88829f7661918d6610e8f54543b42d7bbbcacc3a5ddcae98d27fbdd6" Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.946940 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2zg8k"] Feb 27 11:45:03 crc kubenswrapper[4728]: E0227 11:45:03.947489 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24192cb3-ca1f-4a5a-9b09-5a1f481c3665" containerName="registry-server" Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.947523 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="24192cb3-ca1f-4a5a-9b09-5a1f481c3665" containerName="registry-server" Feb 27 11:45:03 crc kubenswrapper[4728]: E0227 11:45:03.947538 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24192cb3-ca1f-4a5a-9b09-5a1f481c3665" containerName="extract-content" Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.947544 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="24192cb3-ca1f-4a5a-9b09-5a1f481c3665" containerName="extract-content" Feb 27 11:45:03 crc 
kubenswrapper[4728]: E0227 11:45:03.947559 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24192cb3-ca1f-4a5a-9b09-5a1f481c3665" containerName="extract-utilities" Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.947566 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="24192cb3-ca1f-4a5a-9b09-5a1f481c3665" containerName="extract-utilities" Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.947797 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="24192cb3-ca1f-4a5a-9b09-5a1f481c3665" containerName="registry-server" Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.949606 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2zg8k" Feb 27 11:45:03 crc kubenswrapper[4728]: I0227 11:45:03.957984 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2zg8k"] Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.024239 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkzv8\" (UniqueName: \"kubernetes.io/projected/7f0088df-fc3f-4dca-9b6b-1800a71c1054-kube-api-access-wkzv8\") pod \"community-operators-2zg8k\" (UID: \"7f0088df-fc3f-4dca-9b6b-1800a71c1054\") " pod="openshift-marketplace/community-operators-2zg8k" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.024291 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f0088df-fc3f-4dca-9b6b-1800a71c1054-catalog-content\") pod \"community-operators-2zg8k\" (UID: \"7f0088df-fc3f-4dca-9b6b-1800a71c1054\") " pod="openshift-marketplace/community-operators-2zg8k" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.024322 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7f0088df-fc3f-4dca-9b6b-1800a71c1054-utilities\") pod \"community-operators-2zg8k\" (UID: \"7f0088df-fc3f-4dca-9b6b-1800a71c1054\") " pod="openshift-marketplace/community-operators-2zg8k" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.126248 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkzv8\" (UniqueName: \"kubernetes.io/projected/7f0088df-fc3f-4dca-9b6b-1800a71c1054-kube-api-access-wkzv8\") pod \"community-operators-2zg8k\" (UID: \"7f0088df-fc3f-4dca-9b6b-1800a71c1054\") " pod="openshift-marketplace/community-operators-2zg8k" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.126291 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f0088df-fc3f-4dca-9b6b-1800a71c1054-catalog-content\") pod \"community-operators-2zg8k\" (UID: \"7f0088df-fc3f-4dca-9b6b-1800a71c1054\") " pod="openshift-marketplace/community-operators-2zg8k" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.126324 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f0088df-fc3f-4dca-9b6b-1800a71c1054-utilities\") pod \"community-operators-2zg8k\" (UID: \"7f0088df-fc3f-4dca-9b6b-1800a71c1054\") " pod="openshift-marketplace/community-operators-2zg8k" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.126804 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f0088df-fc3f-4dca-9b6b-1800a71c1054-catalog-content\") pod \"community-operators-2zg8k\" (UID: \"7f0088df-fc3f-4dca-9b6b-1800a71c1054\") " pod="openshift-marketplace/community-operators-2zg8k" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.126837 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7f0088df-fc3f-4dca-9b6b-1800a71c1054-utilities\") pod \"community-operators-2zg8k\" (UID: \"7f0088df-fc3f-4dca-9b6b-1800a71c1054\") " pod="openshift-marketplace/community-operators-2zg8k" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.144795 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2c4gn"] Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.148607 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2c4gn" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.155451 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkzv8\" (UniqueName: \"kubernetes.io/projected/7f0088df-fc3f-4dca-9b6b-1800a71c1054-kube-api-access-wkzv8\") pod \"community-operators-2zg8k\" (UID: \"7f0088df-fc3f-4dca-9b6b-1800a71c1054\") " pod="openshift-marketplace/community-operators-2zg8k" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.171255 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2c4gn"] Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.228483 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f4b28bd-ad68-4508-b57e-b9967394bd4e-catalog-content\") pod \"certified-operators-2c4gn\" (UID: \"5f4b28bd-ad68-4508-b57e-b9967394bd4e\") " pod="openshift-marketplace/certified-operators-2c4gn" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.228861 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f4b28bd-ad68-4508-b57e-b9967394bd4e-utilities\") pod \"certified-operators-2c4gn\" (UID: \"5f4b28bd-ad68-4508-b57e-b9967394bd4e\") " pod="openshift-marketplace/certified-operators-2c4gn" Feb 27 11:45:04 crc 
kubenswrapper[4728]: I0227 11:45:04.228897 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m5fm\" (UniqueName: \"kubernetes.io/projected/5f4b28bd-ad68-4508-b57e-b9967394bd4e-kube-api-access-7m5fm\") pod \"certified-operators-2c4gn\" (UID: \"5f4b28bd-ad68-4508-b57e-b9967394bd4e\") " pod="openshift-marketplace/certified-operators-2c4gn" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.294414 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2zg8k" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.331223 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f4b28bd-ad68-4508-b57e-b9967394bd4e-utilities\") pod \"certified-operators-2c4gn\" (UID: \"5f4b28bd-ad68-4508-b57e-b9967394bd4e\") " pod="openshift-marketplace/certified-operators-2c4gn" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.331304 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m5fm\" (UniqueName: \"kubernetes.io/projected/5f4b28bd-ad68-4508-b57e-b9967394bd4e-kube-api-access-7m5fm\") pod \"certified-operators-2c4gn\" (UID: \"5f4b28bd-ad68-4508-b57e-b9967394bd4e\") " pod="openshift-marketplace/certified-operators-2c4gn" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.331577 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f4b28bd-ad68-4508-b57e-b9967394bd4e-catalog-content\") pod \"certified-operators-2c4gn\" (UID: \"5f4b28bd-ad68-4508-b57e-b9967394bd4e\") " pod="openshift-marketplace/certified-operators-2c4gn" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.331901 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5f4b28bd-ad68-4508-b57e-b9967394bd4e-utilities\") pod \"certified-operators-2c4gn\" (UID: \"5f4b28bd-ad68-4508-b57e-b9967394bd4e\") " pod="openshift-marketplace/certified-operators-2c4gn" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.332039 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f4b28bd-ad68-4508-b57e-b9967394bd4e-catalog-content\") pod \"certified-operators-2c4gn\" (UID: \"5f4b28bd-ad68-4508-b57e-b9967394bd4e\") " pod="openshift-marketplace/certified-operators-2c4gn" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.356528 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m5fm\" (UniqueName: \"kubernetes.io/projected/5f4b28bd-ad68-4508-b57e-b9967394bd4e-kube-api-access-7m5fm\") pod \"certified-operators-2c4gn\" (UID: \"5f4b28bd-ad68-4508-b57e-b9967394bd4e\") " pod="openshift-marketplace/certified-operators-2c4gn" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.526291 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2c4gn" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.692468 4728 generic.go:334] "Generic (PLEG): container finished" podID="21aa8652-e254-421a-bda3-b25c9419b995" containerID="931c2743c42a8a97453974b7d4c711b288e0da3c52d3f100c4b9719548717963" exitCode=0 Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.692574 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7" event={"ID":"21aa8652-e254-421a-bda3-b25c9419b995","Type":"ContainerDied","Data":"931c2743c42a8a97453974b7d4c711b288e0da3c52d3f100c4b9719548717963"} Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.770537 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24192cb3-ca1f-4a5a-9b09-5a1f481c3665" path="/var/lib/kubelet/pods/24192cb3-ca1f-4a5a-9b09-5a1f481c3665/volumes" Feb 27 11:45:04 crc kubenswrapper[4728]: I0227 11:45:04.841955 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2zg8k"] Feb 27 11:45:05 crc kubenswrapper[4728]: W0227 11:45:05.075133 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f4b28bd_ad68_4508_b57e_b9967394bd4e.slice/crio-ac6bcb4bd0e7c5ffba50984c247a9d59cbc940c4dc7a0500c5dfafbd9ea6ca43 WatchSource:0}: Error finding container ac6bcb4bd0e7c5ffba50984c247a9d59cbc940c4dc7a0500c5dfafbd9ea6ca43: Status 404 returned error can't find the container with id ac6bcb4bd0e7c5ffba50984c247a9d59cbc940c4dc7a0500c5dfafbd9ea6ca43 Feb 27 11:45:05 crc kubenswrapper[4728]: I0227 11:45:05.079234 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2c4gn"] Feb 27 11:45:05 crc kubenswrapper[4728]: I0227 11:45:05.716917 4728 generic.go:334] "Generic (PLEG): container finished" podID="7f0088df-fc3f-4dca-9b6b-1800a71c1054" 
containerID="ae9b90cbe9b4905df97bca4b76e048fadc1b4126f09dff9ab8303ff2da8df9fb" exitCode=0 Feb 27 11:45:05 crc kubenswrapper[4728]: I0227 11:45:05.717001 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zg8k" event={"ID":"7f0088df-fc3f-4dca-9b6b-1800a71c1054","Type":"ContainerDied","Data":"ae9b90cbe9b4905df97bca4b76e048fadc1b4126f09dff9ab8303ff2da8df9fb"} Feb 27 11:45:05 crc kubenswrapper[4728]: I0227 11:45:05.717228 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zg8k" event={"ID":"7f0088df-fc3f-4dca-9b6b-1800a71c1054","Type":"ContainerStarted","Data":"327850dbbd5901238f83be3ce0f22fee29e9e5f734d781cb5e8fc5639bb600b5"} Feb 27 11:45:05 crc kubenswrapper[4728]: I0227 11:45:05.719992 4728 generic.go:334] "Generic (PLEG): container finished" podID="5f4b28bd-ad68-4508-b57e-b9967394bd4e" containerID="a2ea2698bbe640007c03176bdc935af5b7056570a73593c8f837cfb9c60aaf3c" exitCode=0 Feb 27 11:45:05 crc kubenswrapper[4728]: I0227 11:45:05.720071 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c4gn" event={"ID":"5f4b28bd-ad68-4508-b57e-b9967394bd4e","Type":"ContainerDied","Data":"a2ea2698bbe640007c03176bdc935af5b7056570a73593c8f837cfb9c60aaf3c"} Feb 27 11:45:05 crc kubenswrapper[4728]: I0227 11:45:05.720104 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c4gn" event={"ID":"5f4b28bd-ad68-4508-b57e-b9967394bd4e","Type":"ContainerStarted","Data":"ac6bcb4bd0e7c5ffba50984c247a9d59cbc940c4dc7a0500c5dfafbd9ea6ca43"} Feb 27 11:45:06 crc kubenswrapper[4728]: I0227 11:45:06.140957 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7" Feb 27 11:45:06 crc kubenswrapper[4728]: I0227 11:45:06.207126 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctnrm\" (UniqueName: \"kubernetes.io/projected/21aa8652-e254-421a-bda3-b25c9419b995-kube-api-access-ctnrm\") pod \"21aa8652-e254-421a-bda3-b25c9419b995\" (UID: \"21aa8652-e254-421a-bda3-b25c9419b995\") " Feb 27 11:45:06 crc kubenswrapper[4728]: I0227 11:45:06.207173 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21aa8652-e254-421a-bda3-b25c9419b995-config-volume\") pod \"21aa8652-e254-421a-bda3-b25c9419b995\" (UID: \"21aa8652-e254-421a-bda3-b25c9419b995\") " Feb 27 11:45:06 crc kubenswrapper[4728]: I0227 11:45:06.207227 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21aa8652-e254-421a-bda3-b25c9419b995-secret-volume\") pod \"21aa8652-e254-421a-bda3-b25c9419b995\" (UID: \"21aa8652-e254-421a-bda3-b25c9419b995\") " Feb 27 11:45:06 crc kubenswrapper[4728]: I0227 11:45:06.209271 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21aa8652-e254-421a-bda3-b25c9419b995-config-volume" (OuterVolumeSpecName: "config-volume") pod "21aa8652-e254-421a-bda3-b25c9419b995" (UID: "21aa8652-e254-421a-bda3-b25c9419b995"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:45:06 crc kubenswrapper[4728]: I0227 11:45:06.215247 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21aa8652-e254-421a-bda3-b25c9419b995-kube-api-access-ctnrm" (OuterVolumeSpecName: "kube-api-access-ctnrm") pod "21aa8652-e254-421a-bda3-b25c9419b995" (UID: "21aa8652-e254-421a-bda3-b25c9419b995"). 
InnerVolumeSpecName "kube-api-access-ctnrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:45:06 crc kubenswrapper[4728]: I0227 11:45:06.217802 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21aa8652-e254-421a-bda3-b25c9419b995-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "21aa8652-e254-421a-bda3-b25c9419b995" (UID: "21aa8652-e254-421a-bda3-b25c9419b995"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:45:06 crc kubenswrapper[4728]: I0227 11:45:06.311253 4728 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21aa8652-e254-421a-bda3-b25c9419b995-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:06 crc kubenswrapper[4728]: I0227 11:45:06.311702 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctnrm\" (UniqueName: \"kubernetes.io/projected/21aa8652-e254-421a-bda3-b25c9419b995-kube-api-access-ctnrm\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:06 crc kubenswrapper[4728]: I0227 11:45:06.311857 4728 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21aa8652-e254-421a-bda3-b25c9419b995-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:06 crc kubenswrapper[4728]: I0227 11:45:06.740728 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zg8k" event={"ID":"7f0088df-fc3f-4dca-9b6b-1800a71c1054","Type":"ContainerStarted","Data":"66001fc20a57203cd71dfcd7f6e73ef7c1c030865801130212281da5ffe8d819"} Feb 27 11:45:06 crc kubenswrapper[4728]: I0227 11:45:06.741075 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c4gn" event={"ID":"5f4b28bd-ad68-4508-b57e-b9967394bd4e","Type":"ContainerStarted","Data":"c3f1cc568bea5cd9168c6d9910e9f0e305f0bd978226f73bf54eead3c0d16492"} Feb 27 
11:45:06 crc kubenswrapper[4728]: I0227 11:45:06.742543 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7" event={"ID":"21aa8652-e254-421a-bda3-b25c9419b995","Type":"ContainerDied","Data":"a6987a993e670ca07670175369c9cd05ea7a5b2e5fddd642508af7754c92d86e"} Feb 27 11:45:06 crc kubenswrapper[4728]: I0227 11:45:06.742591 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6987a993e670ca07670175369c9cd05ea7a5b2e5fddd642508af7754c92d86e" Feb 27 11:45:06 crc kubenswrapper[4728]: I0227 11:45:06.742670 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536545-4hmh7" Feb 27 11:45:07 crc kubenswrapper[4728]: I0227 11:45:07.250051 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22"] Feb 27 11:45:07 crc kubenswrapper[4728]: I0227 11:45:07.260301 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536500-fxt22"] Feb 27 11:45:08 crc kubenswrapper[4728]: I0227 11:45:08.725350 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:45:08 crc kubenswrapper[4728]: E0227 11:45:08.726048 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:45:08 crc kubenswrapper[4728]: I0227 11:45:08.739860 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="896c73eb-cb8d-4aa1-88e7-9748213bb799" path="/var/lib/kubelet/pods/896c73eb-cb8d-4aa1-88e7-9748213bb799/volumes" Feb 27 11:45:10 crc kubenswrapper[4728]: I0227 11:45:10.797882 4728 generic.go:334] "Generic (PLEG): container finished" podID="5f4b28bd-ad68-4508-b57e-b9967394bd4e" containerID="c3f1cc568bea5cd9168c6d9910e9f0e305f0bd978226f73bf54eead3c0d16492" exitCode=0 Feb 27 11:45:10 crc kubenswrapper[4728]: I0227 11:45:10.797999 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c4gn" event={"ID":"5f4b28bd-ad68-4508-b57e-b9967394bd4e","Type":"ContainerDied","Data":"c3f1cc568bea5cd9168c6d9910e9f0e305f0bd978226f73bf54eead3c0d16492"} Feb 27 11:45:10 crc kubenswrapper[4728]: I0227 11:45:10.804285 4728 generic.go:334] "Generic (PLEG): container finished" podID="7f0088df-fc3f-4dca-9b6b-1800a71c1054" containerID="66001fc20a57203cd71dfcd7f6e73ef7c1c030865801130212281da5ffe8d819" exitCode=0 Feb 27 11:45:10 crc kubenswrapper[4728]: I0227 11:45:10.804473 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zg8k" event={"ID":"7f0088df-fc3f-4dca-9b6b-1800a71c1054","Type":"ContainerDied","Data":"66001fc20a57203cd71dfcd7f6e73ef7c1c030865801130212281da5ffe8d819"} Feb 27 11:45:11 crc kubenswrapper[4728]: I0227 11:45:11.826447 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zg8k" event={"ID":"7f0088df-fc3f-4dca-9b6b-1800a71c1054","Type":"ContainerStarted","Data":"c79d84111e8ac43bd6740c55035b4ca827b7032b2064ca92d8373be665f1692f"} Feb 27 11:45:11 crc kubenswrapper[4728]: I0227 11:45:11.830623 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c4gn" event={"ID":"5f4b28bd-ad68-4508-b57e-b9967394bd4e","Type":"ContainerStarted","Data":"fa9eabf1271c50a949d1139f557d757db71e3f45b0478468b709c4dcef64c994"} Feb 27 11:45:11 crc kubenswrapper[4728]: I0227 11:45:11.858827 4728 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2zg8k" podStartSLOduration=3.352771169 podStartE2EDuration="8.858800472s" podCreationTimestamp="2026-02-27 11:45:03 +0000 UTC" firstStartedPulling="2026-02-27 11:45:05.720141867 +0000 UTC m=+4725.682507963" lastFinishedPulling="2026-02-27 11:45:11.22617115 +0000 UTC m=+4731.188537266" observedRunningTime="2026-02-27 11:45:11.850155705 +0000 UTC m=+4731.812521831" watchObservedRunningTime="2026-02-27 11:45:11.858800472 +0000 UTC m=+4731.821166608" Feb 27 11:45:11 crc kubenswrapper[4728]: I0227 11:45:11.884181 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2c4gn" podStartSLOduration=2.413090594 podStartE2EDuration="7.884156558s" podCreationTimestamp="2026-02-27 11:45:04 +0000 UTC" firstStartedPulling="2026-02-27 11:45:05.722648476 +0000 UTC m=+4725.685014582" lastFinishedPulling="2026-02-27 11:45:11.19371444 +0000 UTC m=+4731.156080546" observedRunningTime="2026-02-27 11:45:11.869105435 +0000 UTC m=+4731.831471551" watchObservedRunningTime="2026-02-27 11:45:11.884156558 +0000 UTC m=+4731.846522674" Feb 27 11:45:14 crc kubenswrapper[4728]: I0227 11:45:14.295307 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2zg8k" Feb 27 11:45:14 crc kubenswrapper[4728]: I0227 11:45:14.295712 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2zg8k" Feb 27 11:45:14 crc kubenswrapper[4728]: I0227 11:45:14.526493 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2c4gn" Feb 27 11:45:14 crc kubenswrapper[4728]: I0227 11:45:14.526549 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2c4gn" Feb 27 11:45:15 crc kubenswrapper[4728]: 
I0227 11:45:15.362612 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2zg8k" podUID="7f0088df-fc3f-4dca-9b6b-1800a71c1054" containerName="registry-server" probeResult="failure" output=< Feb 27 11:45:15 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:45:15 crc kubenswrapper[4728]: > Feb 27 11:45:15 crc kubenswrapper[4728]: I0227 11:45:15.593046 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-2c4gn" podUID="5f4b28bd-ad68-4508-b57e-b9967394bd4e" containerName="registry-server" probeResult="failure" output=< Feb 27 11:45:15 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:45:15 crc kubenswrapper[4728]: > Feb 27 11:45:17 crc kubenswrapper[4728]: I0227 11:45:17.893469 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"87d887cf-2e16-418f-a903-a24551953f9d","Type":"ContainerStarted","Data":"50a9cf204daae35e814e75511fa98a73e3bfd7bb886ed1c19a155862e2bb376e"} Feb 27 11:45:17 crc kubenswrapper[4728]: I0227 11:45:17.921845 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.566332013 podStartE2EDuration="55.921816882s" podCreationTimestamp="2026-02-27 11:44:22 +0000 UTC" firstStartedPulling="2026-02-27 11:44:24.964405478 +0000 UTC m=+4684.926771574" lastFinishedPulling="2026-02-27 11:45:15.319890337 +0000 UTC m=+4735.282256443" observedRunningTime="2026-02-27 11:45:17.915709625 +0000 UTC m=+4737.878075741" watchObservedRunningTime="2026-02-27 11:45:17.921816882 +0000 UTC m=+4737.884183028" Feb 27 11:45:19 crc kubenswrapper[4728]: I0227 11:45:19.725836 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:45:19 crc kubenswrapper[4728]: E0227 11:45:19.727904 4728 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:45:24 crc kubenswrapper[4728]: I0227 11:45:24.353722 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2zg8k" Feb 27 11:45:24 crc kubenswrapper[4728]: I0227 11:45:24.421012 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2zg8k" Feb 27 11:45:24 crc kubenswrapper[4728]: I0227 11:45:24.583793 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2c4gn" Feb 27 11:45:24 crc kubenswrapper[4728]: I0227 11:45:24.606943 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2zg8k"] Feb 27 11:45:24 crc kubenswrapper[4728]: I0227 11:45:24.650205 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2c4gn" Feb 27 11:45:25 crc kubenswrapper[4728]: I0227 11:45:25.997850 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2zg8k" podUID="7f0088df-fc3f-4dca-9b6b-1800a71c1054" containerName="registry-server" containerID="cri-o://c79d84111e8ac43bd6740c55035b4ca827b7032b2064ca92d8373be665f1692f" gracePeriod=2 Feb 27 11:45:26 crc kubenswrapper[4728]: I0227 11:45:26.621655 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2zg8k" Feb 27 11:45:26 crc kubenswrapper[4728]: I0227 11:45:26.693363 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkzv8\" (UniqueName: \"kubernetes.io/projected/7f0088df-fc3f-4dca-9b6b-1800a71c1054-kube-api-access-wkzv8\") pod \"7f0088df-fc3f-4dca-9b6b-1800a71c1054\" (UID: \"7f0088df-fc3f-4dca-9b6b-1800a71c1054\") " Feb 27 11:45:26 crc kubenswrapper[4728]: I0227 11:45:26.693549 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f0088df-fc3f-4dca-9b6b-1800a71c1054-catalog-content\") pod \"7f0088df-fc3f-4dca-9b6b-1800a71c1054\" (UID: \"7f0088df-fc3f-4dca-9b6b-1800a71c1054\") " Feb 27 11:45:26 crc kubenswrapper[4728]: I0227 11:45:26.693907 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f0088df-fc3f-4dca-9b6b-1800a71c1054-utilities\") pod \"7f0088df-fc3f-4dca-9b6b-1800a71c1054\" (UID: \"7f0088df-fc3f-4dca-9b6b-1800a71c1054\") " Feb 27 11:45:26 crc kubenswrapper[4728]: I0227 11:45:26.695229 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f0088df-fc3f-4dca-9b6b-1800a71c1054-utilities" (OuterVolumeSpecName: "utilities") pod "7f0088df-fc3f-4dca-9b6b-1800a71c1054" (UID: "7f0088df-fc3f-4dca-9b6b-1800a71c1054"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:45:26 crc kubenswrapper[4728]: I0227 11:45:26.705125 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f0088df-fc3f-4dca-9b6b-1800a71c1054-kube-api-access-wkzv8" (OuterVolumeSpecName: "kube-api-access-wkzv8") pod "7f0088df-fc3f-4dca-9b6b-1800a71c1054" (UID: "7f0088df-fc3f-4dca-9b6b-1800a71c1054"). InnerVolumeSpecName "kube-api-access-wkzv8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:45:26 crc kubenswrapper[4728]: I0227 11:45:26.750866 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f0088df-fc3f-4dca-9b6b-1800a71c1054-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f0088df-fc3f-4dca-9b6b-1800a71c1054" (UID: "7f0088df-fc3f-4dca-9b6b-1800a71c1054"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:45:26 crc kubenswrapper[4728]: I0227 11:45:26.797190 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f0088df-fc3f-4dca-9b6b-1800a71c1054-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:26 crc kubenswrapper[4728]: I0227 11:45:26.797218 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkzv8\" (UniqueName: \"kubernetes.io/projected/7f0088df-fc3f-4dca-9b6b-1800a71c1054-kube-api-access-wkzv8\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:26 crc kubenswrapper[4728]: I0227 11:45:26.797228 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f0088df-fc3f-4dca-9b6b-1800a71c1054-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.007722 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2c4gn"] Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.008484 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2c4gn" podUID="5f4b28bd-ad68-4508-b57e-b9967394bd4e" containerName="registry-server" containerID="cri-o://fa9eabf1271c50a949d1139f557d757db71e3f45b0478468b709c4dcef64c994" gracePeriod=2 Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.017130 4728 generic.go:334] "Generic (PLEG): container finished" 
podID="7f0088df-fc3f-4dca-9b6b-1800a71c1054" containerID="c79d84111e8ac43bd6740c55035b4ca827b7032b2064ca92d8373be665f1692f" exitCode=0 Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.017170 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zg8k" event={"ID":"7f0088df-fc3f-4dca-9b6b-1800a71c1054","Type":"ContainerDied","Data":"c79d84111e8ac43bd6740c55035b4ca827b7032b2064ca92d8373be665f1692f"} Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.017198 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zg8k" event={"ID":"7f0088df-fc3f-4dca-9b6b-1800a71c1054","Type":"ContainerDied","Data":"327850dbbd5901238f83be3ce0f22fee29e9e5f734d781cb5e8fc5639bb600b5"} Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.017233 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2zg8k" Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.017246 4728 scope.go:117] "RemoveContainer" containerID="c79d84111e8ac43bd6740c55035b4ca827b7032b2064ca92d8373be665f1692f" Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.062725 4728 scope.go:117] "RemoveContainer" containerID="66001fc20a57203cd71dfcd7f6e73ef7c1c030865801130212281da5ffe8d819" Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.085704 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2zg8k"] Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.093367 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2zg8k"] Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.228302 4728 scope.go:117] "RemoveContainer" containerID="ae9b90cbe9b4905df97bca4b76e048fadc1b4126f09dff9ab8303ff2da8df9fb" Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.293849 4728 scope.go:117] "RemoveContainer" 
containerID="c79d84111e8ac43bd6740c55035b4ca827b7032b2064ca92d8373be665f1692f" Feb 27 11:45:27 crc kubenswrapper[4728]: E0227 11:45:27.294337 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c79d84111e8ac43bd6740c55035b4ca827b7032b2064ca92d8373be665f1692f\": container with ID starting with c79d84111e8ac43bd6740c55035b4ca827b7032b2064ca92d8373be665f1692f not found: ID does not exist" containerID="c79d84111e8ac43bd6740c55035b4ca827b7032b2064ca92d8373be665f1692f" Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.294385 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c79d84111e8ac43bd6740c55035b4ca827b7032b2064ca92d8373be665f1692f"} err="failed to get container status \"c79d84111e8ac43bd6740c55035b4ca827b7032b2064ca92d8373be665f1692f\": rpc error: code = NotFound desc = could not find container \"c79d84111e8ac43bd6740c55035b4ca827b7032b2064ca92d8373be665f1692f\": container with ID starting with c79d84111e8ac43bd6740c55035b4ca827b7032b2064ca92d8373be665f1692f not found: ID does not exist" Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.294418 4728 scope.go:117] "RemoveContainer" containerID="66001fc20a57203cd71dfcd7f6e73ef7c1c030865801130212281da5ffe8d819" Feb 27 11:45:27 crc kubenswrapper[4728]: E0227 11:45:27.294733 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66001fc20a57203cd71dfcd7f6e73ef7c1c030865801130212281da5ffe8d819\": container with ID starting with 66001fc20a57203cd71dfcd7f6e73ef7c1c030865801130212281da5ffe8d819 not found: ID does not exist" containerID="66001fc20a57203cd71dfcd7f6e73ef7c1c030865801130212281da5ffe8d819" Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.295013 4728 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"66001fc20a57203cd71dfcd7f6e73ef7c1c030865801130212281da5ffe8d819"} err="failed to get container status \"66001fc20a57203cd71dfcd7f6e73ef7c1c030865801130212281da5ffe8d819\": rpc error: code = NotFound desc = could not find container \"66001fc20a57203cd71dfcd7f6e73ef7c1c030865801130212281da5ffe8d819\": container with ID starting with 66001fc20a57203cd71dfcd7f6e73ef7c1c030865801130212281da5ffe8d819 not found: ID does not exist" Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.295051 4728 scope.go:117] "RemoveContainer" containerID="ae9b90cbe9b4905df97bca4b76e048fadc1b4126f09dff9ab8303ff2da8df9fb" Feb 27 11:45:27 crc kubenswrapper[4728]: E0227 11:45:27.295534 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae9b90cbe9b4905df97bca4b76e048fadc1b4126f09dff9ab8303ff2da8df9fb\": container with ID starting with ae9b90cbe9b4905df97bca4b76e048fadc1b4126f09dff9ab8303ff2da8df9fb not found: ID does not exist" containerID="ae9b90cbe9b4905df97bca4b76e048fadc1b4126f09dff9ab8303ff2da8df9fb" Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.295575 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae9b90cbe9b4905df97bca4b76e048fadc1b4126f09dff9ab8303ff2da8df9fb"} err="failed to get container status \"ae9b90cbe9b4905df97bca4b76e048fadc1b4126f09dff9ab8303ff2da8df9fb\": rpc error: code = NotFound desc = could not find container \"ae9b90cbe9b4905df97bca4b76e048fadc1b4126f09dff9ab8303ff2da8df9fb\": container with ID starting with ae9b90cbe9b4905df97bca4b76e048fadc1b4126f09dff9ab8303ff2da8df9fb not found: ID does not exist" Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.623983 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2c4gn" Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.722748 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m5fm\" (UniqueName: \"kubernetes.io/projected/5f4b28bd-ad68-4508-b57e-b9967394bd4e-kube-api-access-7m5fm\") pod \"5f4b28bd-ad68-4508-b57e-b9967394bd4e\" (UID: \"5f4b28bd-ad68-4508-b57e-b9967394bd4e\") " Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.723001 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f4b28bd-ad68-4508-b57e-b9967394bd4e-utilities\") pod \"5f4b28bd-ad68-4508-b57e-b9967394bd4e\" (UID: \"5f4b28bd-ad68-4508-b57e-b9967394bd4e\") " Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.723223 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f4b28bd-ad68-4508-b57e-b9967394bd4e-catalog-content\") pod \"5f4b28bd-ad68-4508-b57e-b9967394bd4e\" (UID: \"5f4b28bd-ad68-4508-b57e-b9967394bd4e\") " Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.724642 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f4b28bd-ad68-4508-b57e-b9967394bd4e-utilities" (OuterVolumeSpecName: "utilities") pod "5f4b28bd-ad68-4508-b57e-b9967394bd4e" (UID: "5f4b28bd-ad68-4508-b57e-b9967394bd4e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.728142 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f4b28bd-ad68-4508-b57e-b9967394bd4e-kube-api-access-7m5fm" (OuterVolumeSpecName: "kube-api-access-7m5fm") pod "5f4b28bd-ad68-4508-b57e-b9967394bd4e" (UID: "5f4b28bd-ad68-4508-b57e-b9967394bd4e"). InnerVolumeSpecName "kube-api-access-7m5fm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.774225 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f4b28bd-ad68-4508-b57e-b9967394bd4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f4b28bd-ad68-4508-b57e-b9967394bd4e" (UID: "5f4b28bd-ad68-4508-b57e-b9967394bd4e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.827402 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f4b28bd-ad68-4508-b57e-b9967394bd4e-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.827444 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f4b28bd-ad68-4508-b57e-b9967394bd4e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:27 crc kubenswrapper[4728]: I0227 11:45:27.827461 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m5fm\" (UniqueName: \"kubernetes.io/projected/5f4b28bd-ad68-4508-b57e-b9967394bd4e-kube-api-access-7m5fm\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:28 crc kubenswrapper[4728]: I0227 11:45:28.046715 4728 generic.go:334] "Generic (PLEG): container finished" podID="5f4b28bd-ad68-4508-b57e-b9967394bd4e" containerID="fa9eabf1271c50a949d1139f557d757db71e3f45b0478468b709c4dcef64c994" exitCode=0 Feb 27 11:45:28 crc kubenswrapper[4728]: I0227 11:45:28.046782 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c4gn" event={"ID":"5f4b28bd-ad68-4508-b57e-b9967394bd4e","Type":"ContainerDied","Data":"fa9eabf1271c50a949d1139f557d757db71e3f45b0478468b709c4dcef64c994"} Feb 27 11:45:28 crc kubenswrapper[4728]: I0227 11:45:28.046806 4728 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2c4gn" Feb 27 11:45:28 crc kubenswrapper[4728]: I0227 11:45:28.046834 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2c4gn" event={"ID":"5f4b28bd-ad68-4508-b57e-b9967394bd4e","Type":"ContainerDied","Data":"ac6bcb4bd0e7c5ffba50984c247a9d59cbc940c4dc7a0500c5dfafbd9ea6ca43"} Feb 27 11:45:28 crc kubenswrapper[4728]: I0227 11:45:28.046875 4728 scope.go:117] "RemoveContainer" containerID="fa9eabf1271c50a949d1139f557d757db71e3f45b0478468b709c4dcef64c994" Feb 27 11:45:28 crc kubenswrapper[4728]: I0227 11:45:28.083833 4728 scope.go:117] "RemoveContainer" containerID="c3f1cc568bea5cd9168c6d9910e9f0e305f0bd978226f73bf54eead3c0d16492" Feb 27 11:45:28 crc kubenswrapper[4728]: I0227 11:45:28.093758 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2c4gn"] Feb 27 11:45:28 crc kubenswrapper[4728]: I0227 11:45:28.108728 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2c4gn"] Feb 27 11:45:28 crc kubenswrapper[4728]: I0227 11:45:28.130550 4728 scope.go:117] "RemoveContainer" containerID="a2ea2698bbe640007c03176bdc935af5b7056570a73593c8f837cfb9c60aaf3c" Feb 27 11:45:28 crc kubenswrapper[4728]: I0227 11:45:28.218766 4728 scope.go:117] "RemoveContainer" containerID="fa9eabf1271c50a949d1139f557d757db71e3f45b0478468b709c4dcef64c994" Feb 27 11:45:28 crc kubenswrapper[4728]: E0227 11:45:28.219413 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa9eabf1271c50a949d1139f557d757db71e3f45b0478468b709c4dcef64c994\": container with ID starting with fa9eabf1271c50a949d1139f557d757db71e3f45b0478468b709c4dcef64c994 not found: ID does not exist" containerID="fa9eabf1271c50a949d1139f557d757db71e3f45b0478468b709c4dcef64c994" Feb 27 11:45:28 crc kubenswrapper[4728]: I0227 11:45:28.219450 
4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa9eabf1271c50a949d1139f557d757db71e3f45b0478468b709c4dcef64c994"} err="failed to get container status \"fa9eabf1271c50a949d1139f557d757db71e3f45b0478468b709c4dcef64c994\": rpc error: code = NotFound desc = could not find container \"fa9eabf1271c50a949d1139f557d757db71e3f45b0478468b709c4dcef64c994\": container with ID starting with fa9eabf1271c50a949d1139f557d757db71e3f45b0478468b709c4dcef64c994 not found: ID does not exist" Feb 27 11:45:28 crc kubenswrapper[4728]: I0227 11:45:28.219477 4728 scope.go:117] "RemoveContainer" containerID="c3f1cc568bea5cd9168c6d9910e9f0e305f0bd978226f73bf54eead3c0d16492" Feb 27 11:45:28 crc kubenswrapper[4728]: E0227 11:45:28.219892 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3f1cc568bea5cd9168c6d9910e9f0e305f0bd978226f73bf54eead3c0d16492\": container with ID starting with c3f1cc568bea5cd9168c6d9910e9f0e305f0bd978226f73bf54eead3c0d16492 not found: ID does not exist" containerID="c3f1cc568bea5cd9168c6d9910e9f0e305f0bd978226f73bf54eead3c0d16492" Feb 27 11:45:28 crc kubenswrapper[4728]: I0227 11:45:28.219932 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3f1cc568bea5cd9168c6d9910e9f0e305f0bd978226f73bf54eead3c0d16492"} err="failed to get container status \"c3f1cc568bea5cd9168c6d9910e9f0e305f0bd978226f73bf54eead3c0d16492\": rpc error: code = NotFound desc = could not find container \"c3f1cc568bea5cd9168c6d9910e9f0e305f0bd978226f73bf54eead3c0d16492\": container with ID starting with c3f1cc568bea5cd9168c6d9910e9f0e305f0bd978226f73bf54eead3c0d16492 not found: ID does not exist" Feb 27 11:45:28 crc kubenswrapper[4728]: I0227 11:45:28.219960 4728 scope.go:117] "RemoveContainer" containerID="a2ea2698bbe640007c03176bdc935af5b7056570a73593c8f837cfb9c60aaf3c" Feb 27 11:45:28 crc kubenswrapper[4728]: E0227 
11:45:28.220305 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2ea2698bbe640007c03176bdc935af5b7056570a73593c8f837cfb9c60aaf3c\": container with ID starting with a2ea2698bbe640007c03176bdc935af5b7056570a73593c8f837cfb9c60aaf3c not found: ID does not exist" containerID="a2ea2698bbe640007c03176bdc935af5b7056570a73593c8f837cfb9c60aaf3c" Feb 27 11:45:28 crc kubenswrapper[4728]: I0227 11:45:28.220348 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2ea2698bbe640007c03176bdc935af5b7056570a73593c8f837cfb9c60aaf3c"} err="failed to get container status \"a2ea2698bbe640007c03176bdc935af5b7056570a73593c8f837cfb9c60aaf3c\": rpc error: code = NotFound desc = could not find container \"a2ea2698bbe640007c03176bdc935af5b7056570a73593c8f837cfb9c60aaf3c\": container with ID starting with a2ea2698bbe640007c03176bdc935af5b7056570a73593c8f837cfb9c60aaf3c not found: ID does not exist" Feb 27 11:45:28 crc kubenswrapper[4728]: I0227 11:45:28.741420 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f4b28bd-ad68-4508-b57e-b9967394bd4e" path="/var/lib/kubelet/pods/5f4b28bd-ad68-4508-b57e-b9967394bd4e/volumes" Feb 27 11:45:28 crc kubenswrapper[4728]: I0227 11:45:28.742484 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f0088df-fc3f-4dca-9b6b-1800a71c1054" path="/var/lib/kubelet/pods/7f0088df-fc3f-4dca-9b6b-1800a71c1054/volumes" Feb 27 11:45:32 crc kubenswrapper[4728]: I0227 11:45:32.485765 4728 scope.go:117] "RemoveContainer" containerID="48f6ffb83f6a36d6d6a86dc65837a6815f32590ad606cfea7ee28491fb4626d7" Feb 27 11:45:34 crc kubenswrapper[4728]: I0227 11:45:34.725094 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:45:34 crc kubenswrapper[4728]: E0227 11:45:34.725941 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:45:38 crc kubenswrapper[4728]: I0227 11:45:38.182498 4728 generic.go:334] "Generic (PLEG): container finished" podID="87d887cf-2e16-418f-a903-a24551953f9d" containerID="50a9cf204daae35e814e75511fa98a73e3bfd7bb886ed1c19a155862e2bb376e" exitCode=123 Feb 27 11:45:38 crc kubenswrapper[4728]: I0227 11:45:38.182574 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"87d887cf-2e16-418f-a903-a24551953f9d","Type":"ContainerDied","Data":"50a9cf204daae35e814e75511fa98a73e3bfd7bb886ed1c19a155862e2bb376e"} Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.710554 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.887358 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/87d887cf-2e16-418f-a903-a24551953f9d-test-operator-ephemeral-temporary\") pod \"87d887cf-2e16-418f-a903-a24551953f9d\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.887434 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/87d887cf-2e16-418f-a903-a24551953f9d-openstack-config-secret\") pod \"87d887cf-2e16-418f-a903-a24551953f9d\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.887522 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87d887cf-2e16-418f-a903-a24551953f9d-ssh-key\") pod \"87d887cf-2e16-418f-a903-a24551953f9d\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.887596 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxfcg\" (UniqueName: \"kubernetes.io/projected/87d887cf-2e16-418f-a903-a24551953f9d-kube-api-access-nxfcg\") pod \"87d887cf-2e16-418f-a903-a24551953f9d\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.887665 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/87d887cf-2e16-418f-a903-a24551953f9d-openstack-config\") pod \"87d887cf-2e16-418f-a903-a24551953f9d\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.887693 4728 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"87d887cf-2e16-418f-a903-a24551953f9d\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.887768 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87d887cf-2e16-418f-a903-a24551953f9d-config-data\") pod \"87d887cf-2e16-418f-a903-a24551953f9d\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.887794 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/87d887cf-2e16-418f-a903-a24551953f9d-test-operator-ephemeral-workdir\") pod \"87d887cf-2e16-418f-a903-a24551953f9d\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.887836 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/87d887cf-2e16-418f-a903-a24551953f9d-ca-certs\") pod \"87d887cf-2e16-418f-a903-a24551953f9d\" (UID: \"87d887cf-2e16-418f-a903-a24551953f9d\") " Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.890170 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87d887cf-2e16-418f-a903-a24551953f9d-config-data" (OuterVolumeSpecName: "config-data") pod "87d887cf-2e16-418f-a903-a24551953f9d" (UID: "87d887cf-2e16-418f-a903-a24551953f9d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.890305 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87d887cf-2e16-418f-a903-a24551953f9d-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "87d887cf-2e16-418f-a903-a24551953f9d" (UID: "87d887cf-2e16-418f-a903-a24551953f9d"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.891366 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87d887cf-2e16-418f-a903-a24551953f9d-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "87d887cf-2e16-418f-a903-a24551953f9d" (UID: "87d887cf-2e16-418f-a903-a24551953f9d"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.894781 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "87d887cf-2e16-418f-a903-a24551953f9d" (UID: "87d887cf-2e16-418f-a903-a24551953f9d"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.904546 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87d887cf-2e16-418f-a903-a24551953f9d-kube-api-access-nxfcg" (OuterVolumeSpecName: "kube-api-access-nxfcg") pod "87d887cf-2e16-418f-a903-a24551953f9d" (UID: "87d887cf-2e16-418f-a903-a24551953f9d"). InnerVolumeSpecName "kube-api-access-nxfcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.922897 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87d887cf-2e16-418f-a903-a24551953f9d-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "87d887cf-2e16-418f-a903-a24551953f9d" (UID: "87d887cf-2e16-418f-a903-a24551953f9d"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.924479 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87d887cf-2e16-418f-a903-a24551953f9d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "87d887cf-2e16-418f-a903-a24551953f9d" (UID: "87d887cf-2e16-418f-a903-a24551953f9d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.930935 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87d887cf-2e16-418f-a903-a24551953f9d-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "87d887cf-2e16-418f-a903-a24551953f9d" (UID: "87d887cf-2e16-418f-a903-a24551953f9d"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.946837 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87d887cf-2e16-418f-a903-a24551953f9d-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "87d887cf-2e16-418f-a903-a24551953f9d" (UID: "87d887cf-2e16-418f-a903-a24551953f9d"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.994118 4728 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/87d887cf-2e16-418f-a903-a24551953f9d-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.994171 4728 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/87d887cf-2e16-418f-a903-a24551953f9d-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.994186 4728 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87d887cf-2e16-418f-a903-a24551953f9d-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.994200 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxfcg\" (UniqueName: \"kubernetes.io/projected/87d887cf-2e16-418f-a903-a24551953f9d-kube-api-access-nxfcg\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.994213 4728 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/87d887cf-2e16-418f-a903-a24551953f9d-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.995580 4728 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.995612 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87d887cf-2e16-418f-a903-a24551953f9d-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:39 crc 
kubenswrapper[4728]: I0227 11:45:39.995631 4728 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/87d887cf-2e16-418f-a903-a24551953f9d-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:39 crc kubenswrapper[4728]: I0227 11:45:39.995648 4728 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/87d887cf-2e16-418f-a903-a24551953f9d-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:40 crc kubenswrapper[4728]: I0227 11:45:40.021477 4728 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 27 11:45:40 crc kubenswrapper[4728]: I0227 11:45:40.098022 4728 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 27 11:45:40 crc kubenswrapper[4728]: I0227 11:45:40.214049 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"87d887cf-2e16-418f-a903-a24551953f9d","Type":"ContainerDied","Data":"c7581fa2bb148dd2e01c803b2d67edc5be75f52ff25b7d5d95e121afbe540ba1"} Feb 27 11:45:40 crc kubenswrapper[4728]: I0227 11:45:40.214093 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7581fa2bb148dd2e01c803b2d67edc5be75f52ff25b7d5d95e121afbe540ba1" Feb 27 11:45:40 crc kubenswrapper[4728]: I0227 11:45:40.214096 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 27 11:45:45 crc kubenswrapper[4728]: I0227 11:45:45.727063 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:45:45 crc kubenswrapper[4728]: E0227 11:45:45.728284 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.248658 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 27 11:45:51 crc kubenswrapper[4728]: E0227 11:45:51.252569 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21aa8652-e254-421a-bda3-b25c9419b995" containerName="collect-profiles" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.252600 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="21aa8652-e254-421a-bda3-b25c9419b995" containerName="collect-profiles" Feb 27 11:45:51 crc kubenswrapper[4728]: E0227 11:45:51.252659 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f0088df-fc3f-4dca-9b6b-1800a71c1054" containerName="registry-server" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.252669 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f0088df-fc3f-4dca-9b6b-1800a71c1054" containerName="registry-server" Feb 27 11:45:51 crc kubenswrapper[4728]: E0227 11:45:51.252696 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f0088df-fc3f-4dca-9b6b-1800a71c1054" containerName="extract-utilities" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.252707 4728 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7f0088df-fc3f-4dca-9b6b-1800a71c1054" containerName="extract-utilities" Feb 27 11:45:51 crc kubenswrapper[4728]: E0227 11:45:51.252736 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f4b28bd-ad68-4508-b57e-b9967394bd4e" containerName="extract-content" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.252745 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f4b28bd-ad68-4508-b57e-b9967394bd4e" containerName="extract-content" Feb 27 11:45:51 crc kubenswrapper[4728]: E0227 11:45:51.252823 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f0088df-fc3f-4dca-9b6b-1800a71c1054" containerName="extract-content" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.252840 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f0088df-fc3f-4dca-9b6b-1800a71c1054" containerName="extract-content" Feb 27 11:45:51 crc kubenswrapper[4728]: E0227 11:45:51.252883 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f4b28bd-ad68-4508-b57e-b9967394bd4e" containerName="extract-utilities" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.252894 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f4b28bd-ad68-4508-b57e-b9967394bd4e" containerName="extract-utilities" Feb 27 11:45:51 crc kubenswrapper[4728]: E0227 11:45:51.252919 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d887cf-2e16-418f-a903-a24551953f9d" containerName="tempest-tests-tempest-tests-runner" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.252930 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d887cf-2e16-418f-a903-a24551953f9d" containerName="tempest-tests-tempest-tests-runner" Feb 27 11:45:51 crc kubenswrapper[4728]: E0227 11:45:51.252954 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f4b28bd-ad68-4508-b57e-b9967394bd4e" containerName="registry-server" Feb 27 11:45:51 crc 
kubenswrapper[4728]: I0227 11:45:51.252967 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f4b28bd-ad68-4508-b57e-b9967394bd4e" containerName="registry-server" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.253758 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d887cf-2e16-418f-a903-a24551953f9d" containerName="tempest-tests-tempest-tests-runner" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.253795 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f0088df-fc3f-4dca-9b6b-1800a71c1054" containerName="registry-server" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.253839 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f4b28bd-ad68-4508-b57e-b9967394bd4e" containerName="registry-server" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.253856 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="21aa8652-e254-421a-bda3-b25c9419b995" containerName="collect-profiles" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.255489 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.263438 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-8qr9h" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.288045 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.436556 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c5d9d82a-07e5-4e3c-abfe-9a12cd46d757\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.436682 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds749\" (UniqueName: \"kubernetes.io/projected/c5d9d82a-07e5-4e3c-abfe-9a12cd46d757-kube-api-access-ds749\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c5d9d82a-07e5-4e3c-abfe-9a12cd46d757\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.538809 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds749\" (UniqueName: \"kubernetes.io/projected/c5d9d82a-07e5-4e3c-abfe-9a12cd46d757-kube-api-access-ds749\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c5d9d82a-07e5-4e3c-abfe-9a12cd46d757\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.539031 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c5d9d82a-07e5-4e3c-abfe-9a12cd46d757\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.541852 4728 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c5d9d82a-07e5-4e3c-abfe-9a12cd46d757\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.582193 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds749\" (UniqueName: \"kubernetes.io/projected/c5d9d82a-07e5-4e3c-abfe-9a12cd46d757-kube-api-access-ds749\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c5d9d82a-07e5-4e3c-abfe-9a12cd46d757\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.722786 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c5d9d82a-07e5-4e3c-abfe-9a12cd46d757\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 11:45:51 crc kubenswrapper[4728]: I0227 11:45:51.937823 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 11:45:52 crc kubenswrapper[4728]: I0227 11:45:52.514009 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 27 11:45:53 crc kubenswrapper[4728]: I0227 11:45:53.410612 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"c5d9d82a-07e5-4e3c-abfe-9a12cd46d757","Type":"ContainerStarted","Data":"1ee797ef9202681c15473f038ea6db7ab41a748c3001f99f3071c1f4b3d83331"} Feb 27 11:45:54 crc kubenswrapper[4728]: I0227 11:45:54.424428 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"c5d9d82a-07e5-4e3c-abfe-9a12cd46d757","Type":"ContainerStarted","Data":"3da56849591a0b1f0c641bbafd515e187ebf452873b0c09e189f0116536c9c4a"} Feb 27 11:45:54 crc kubenswrapper[4728]: I0227 11:45:54.450878 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.619020749 podStartE2EDuration="3.450856097s" podCreationTimestamp="2026-02-27 11:45:51 +0000 UTC" firstStartedPulling="2026-02-27 11:45:52.504261346 +0000 UTC m=+4772.466627462" lastFinishedPulling="2026-02-27 11:45:53.336096704 +0000 UTC m=+4773.298462810" observedRunningTime="2026-02-27 11:45:54.43965979 +0000 UTC m=+4774.402025896" watchObservedRunningTime="2026-02-27 11:45:54.450856097 +0000 UTC m=+4774.413222203" Feb 27 11:45:59 crc kubenswrapper[4728]: I0227 11:45:59.725877 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:45:59 crc kubenswrapper[4728]: E0227 11:45:59.726575 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:46:00 crc kubenswrapper[4728]: I0227 11:46:00.184284 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536546-82f6r"] Feb 27 11:46:00 crc kubenswrapper[4728]: I0227 11:46:00.186548 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536546-82f6r" Feb 27 11:46:00 crc kubenswrapper[4728]: I0227 11:46:00.191738 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:46:00 crc kubenswrapper[4728]: I0227 11:46:00.191975 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:46:00 crc kubenswrapper[4728]: I0227 11:46:00.192136 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:46:00 crc kubenswrapper[4728]: I0227 11:46:00.197544 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536546-82f6r"] Feb 27 11:46:00 crc kubenswrapper[4728]: I0227 11:46:00.261919 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6bjf\" (UniqueName: \"kubernetes.io/projected/cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5-kube-api-access-s6bjf\") pod \"auto-csr-approver-29536546-82f6r\" (UID: \"cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5\") " pod="openshift-infra/auto-csr-approver-29536546-82f6r" Feb 27 11:46:00 crc kubenswrapper[4728]: I0227 11:46:00.364357 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6bjf\" (UniqueName: 
\"kubernetes.io/projected/cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5-kube-api-access-s6bjf\") pod \"auto-csr-approver-29536546-82f6r\" (UID: \"cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5\") " pod="openshift-infra/auto-csr-approver-29536546-82f6r" Feb 27 11:46:00 crc kubenswrapper[4728]: I0227 11:46:00.391172 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6bjf\" (UniqueName: \"kubernetes.io/projected/cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5-kube-api-access-s6bjf\") pod \"auto-csr-approver-29536546-82f6r\" (UID: \"cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5\") " pod="openshift-infra/auto-csr-approver-29536546-82f6r" Feb 27 11:46:00 crc kubenswrapper[4728]: I0227 11:46:00.543396 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536546-82f6r" Feb 27 11:46:01 crc kubenswrapper[4728]: I0227 11:46:01.082736 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536546-82f6r"] Feb 27 11:46:01 crc kubenswrapper[4728]: I0227 11:46:01.528396 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536546-82f6r" event={"ID":"cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5","Type":"ContainerStarted","Data":"da3d58a106d2ef753ca20bae88b5c000a31fea4c7461ad140f6050e6448001cf"} Feb 27 11:46:04 crc kubenswrapper[4728]: I0227 11:46:04.601861 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536546-82f6r" event={"ID":"cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5","Type":"ContainerStarted","Data":"caac517b54a5550a452a5a695286e1b7f404fb97b62ee422dc2d6e0441c4b944"} Feb 27 11:46:04 crc kubenswrapper[4728]: I0227 11:46:04.621043 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536546-82f6r" podStartSLOduration=3.491466932 podStartE2EDuration="4.621022591s" podCreationTimestamp="2026-02-27 11:46:00 +0000 UTC" 
firstStartedPulling="2026-02-27 11:46:01.292754902 +0000 UTC m=+4781.255121038" lastFinishedPulling="2026-02-27 11:46:02.422310571 +0000 UTC m=+4782.384676697" observedRunningTime="2026-02-27 11:46:04.615397857 +0000 UTC m=+4784.577763953" watchObservedRunningTime="2026-02-27 11:46:04.621022591 +0000 UTC m=+4784.583388697" Feb 27 11:46:05 crc kubenswrapper[4728]: I0227 11:46:05.616072 4728 generic.go:334] "Generic (PLEG): container finished" podID="cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5" containerID="caac517b54a5550a452a5a695286e1b7f404fb97b62ee422dc2d6e0441c4b944" exitCode=0 Feb 27 11:46:05 crc kubenswrapper[4728]: I0227 11:46:05.616116 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536546-82f6r" event={"ID":"cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5","Type":"ContainerDied","Data":"caac517b54a5550a452a5a695286e1b7f404fb97b62ee422dc2d6e0441c4b944"} Feb 27 11:46:07 crc kubenswrapper[4728]: I0227 11:46:07.099985 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536546-82f6r" Feb 27 11:46:07 crc kubenswrapper[4728]: I0227 11:46:07.148990 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6bjf\" (UniqueName: \"kubernetes.io/projected/cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5-kube-api-access-s6bjf\") pod \"cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5\" (UID: \"cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5\") " Feb 27 11:46:07 crc kubenswrapper[4728]: I0227 11:46:07.170038 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5-kube-api-access-s6bjf" (OuterVolumeSpecName: "kube-api-access-s6bjf") pod "cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5" (UID: "cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5"). InnerVolumeSpecName "kube-api-access-s6bjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:46:07 crc kubenswrapper[4728]: I0227 11:46:07.252863 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6bjf\" (UniqueName: \"kubernetes.io/projected/cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5-kube-api-access-s6bjf\") on node \"crc\" DevicePath \"\"" Feb 27 11:46:07 crc kubenswrapper[4728]: I0227 11:46:07.659607 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536546-82f6r" event={"ID":"cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5","Type":"ContainerDied","Data":"da3d58a106d2ef753ca20bae88b5c000a31fea4c7461ad140f6050e6448001cf"} Feb 27 11:46:07 crc kubenswrapper[4728]: I0227 11:46:07.659948 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da3d58a106d2ef753ca20bae88b5c000a31fea4c7461ad140f6050e6448001cf" Feb 27 11:46:07 crc kubenswrapper[4728]: I0227 11:46:07.659743 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536546-82f6r" Feb 27 11:46:07 crc kubenswrapper[4728]: I0227 11:46:07.742908 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536540-2rzrm"] Feb 27 11:46:07 crc kubenswrapper[4728]: I0227 11:46:07.771680 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536540-2rzrm"] Feb 27 11:46:08 crc kubenswrapper[4728]: I0227 11:46:08.740328 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d050f77f-c609-4ddf-aaea-374a492efb32" path="/var/lib/kubelet/pods/d050f77f-c609-4ddf-aaea-374a492efb32/volumes" Feb 27 11:46:11 crc kubenswrapper[4728]: I0227 11:46:11.726734 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:46:11 crc kubenswrapper[4728]: E0227 11:46:11.727770 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:46:26 crc kubenswrapper[4728]: I0227 11:46:26.725070 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:46:26 crc kubenswrapper[4728]: E0227 11:46:26.725918 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:46:32 crc kubenswrapper[4728]: I0227 11:46:32.656920 4728 scope.go:117] "RemoveContainer" containerID="eec1271a5c15ed7b585dfa66a8b565a01aeb25dddab92fe6068d3109f91f0f69" Feb 27 11:46:41 crc kubenswrapper[4728]: I0227 11:46:41.725706 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:46:41 crc kubenswrapper[4728]: E0227 11:46:41.726892 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:46:45 crc kubenswrapper[4728]: I0227 11:46:45.237623 4728 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-kgv9v/must-gather-4ndc4"] Feb 27 11:46:45 crc kubenswrapper[4728]: E0227 11:46:45.238947 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5" containerName="oc" Feb 27 11:46:45 crc kubenswrapper[4728]: I0227 11:46:45.238965 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5" containerName="oc" Feb 27 11:46:45 crc kubenswrapper[4728]: I0227 11:46:45.239246 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5" containerName="oc" Feb 27 11:46:45 crc kubenswrapper[4728]: I0227 11:46:45.240430 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kgv9v/must-gather-4ndc4" Feb 27 11:46:45 crc kubenswrapper[4728]: I0227 11:46:45.243259 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-kgv9v"/"default-dockercfg-5kflh" Feb 27 11:46:45 crc kubenswrapper[4728]: I0227 11:46:45.250242 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kgv9v/must-gather-4ndc4"] Feb 27 11:46:45 crc kubenswrapper[4728]: I0227 11:46:45.289082 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kgv9v"/"kube-root-ca.crt" Feb 27 11:46:45 crc kubenswrapper[4728]: I0227 11:46:45.289118 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kgv9v"/"openshift-service-ca.crt" Feb 27 11:46:45 crc kubenswrapper[4728]: I0227 11:46:45.347135 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e7a468d6-13f2-49c8-8ccc-faa520a96917-must-gather-output\") pod \"must-gather-4ndc4\" (UID: \"e7a468d6-13f2-49c8-8ccc-faa520a96917\") " pod="openshift-must-gather-kgv9v/must-gather-4ndc4" Feb 27 11:46:45 crc kubenswrapper[4728]: I0227 
11:46:45.347399 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms5n5\" (UniqueName: \"kubernetes.io/projected/e7a468d6-13f2-49c8-8ccc-faa520a96917-kube-api-access-ms5n5\") pod \"must-gather-4ndc4\" (UID: \"e7a468d6-13f2-49c8-8ccc-faa520a96917\") " pod="openshift-must-gather-kgv9v/must-gather-4ndc4" Feb 27 11:46:45 crc kubenswrapper[4728]: I0227 11:46:45.449795 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e7a468d6-13f2-49c8-8ccc-faa520a96917-must-gather-output\") pod \"must-gather-4ndc4\" (UID: \"e7a468d6-13f2-49c8-8ccc-faa520a96917\") " pod="openshift-must-gather-kgv9v/must-gather-4ndc4" Feb 27 11:46:45 crc kubenswrapper[4728]: I0227 11:46:45.449894 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms5n5\" (UniqueName: \"kubernetes.io/projected/e7a468d6-13f2-49c8-8ccc-faa520a96917-kube-api-access-ms5n5\") pod \"must-gather-4ndc4\" (UID: \"e7a468d6-13f2-49c8-8ccc-faa520a96917\") " pod="openshift-must-gather-kgv9v/must-gather-4ndc4" Feb 27 11:46:45 crc kubenswrapper[4728]: I0227 11:46:45.450450 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e7a468d6-13f2-49c8-8ccc-faa520a96917-must-gather-output\") pod \"must-gather-4ndc4\" (UID: \"e7a468d6-13f2-49c8-8ccc-faa520a96917\") " pod="openshift-must-gather-kgv9v/must-gather-4ndc4" Feb 27 11:46:45 crc kubenswrapper[4728]: I0227 11:46:45.479714 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms5n5\" (UniqueName: \"kubernetes.io/projected/e7a468d6-13f2-49c8-8ccc-faa520a96917-kube-api-access-ms5n5\") pod \"must-gather-4ndc4\" (UID: \"e7a468d6-13f2-49c8-8ccc-faa520a96917\") " pod="openshift-must-gather-kgv9v/must-gather-4ndc4" Feb 27 11:46:45 crc kubenswrapper[4728]: I0227 
11:46:45.609125 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kgv9v/must-gather-4ndc4" Feb 27 11:46:46 crc kubenswrapper[4728]: I0227 11:46:46.153212 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kgv9v/must-gather-4ndc4"] Feb 27 11:46:46 crc kubenswrapper[4728]: W0227 11:46:46.155466 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7a468d6_13f2_49c8_8ccc_faa520a96917.slice/crio-0c30c461acbedf83ad07fe65f5023bd5fc46c6a97ac120e5f239739d16bb530e WatchSource:0}: Error finding container 0c30c461acbedf83ad07fe65f5023bd5fc46c6a97ac120e5f239739d16bb530e: Status 404 returned error can't find the container with id 0c30c461acbedf83ad07fe65f5023bd5fc46c6a97ac120e5f239739d16bb530e Feb 27 11:46:47 crc kubenswrapper[4728]: I0227 11:46:47.153690 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kgv9v/must-gather-4ndc4" event={"ID":"e7a468d6-13f2-49c8-8ccc-faa520a96917","Type":"ContainerStarted","Data":"0c30c461acbedf83ad07fe65f5023bd5fc46c6a97ac120e5f239739d16bb530e"} Feb 27 11:46:52 crc kubenswrapper[4728]: I0227 11:46:52.726317 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:46:52 crc kubenswrapper[4728]: E0227 11:46:52.727060 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:46:54 crc kubenswrapper[4728]: I0227 11:46:54.242840 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-kgv9v/must-gather-4ndc4" event={"ID":"e7a468d6-13f2-49c8-8ccc-faa520a96917","Type":"ContainerStarted","Data":"d32c491f61d119e9dbf665eb35caf85e80384e37023906e00ca03ff514acc134"} Feb 27 11:46:55 crc kubenswrapper[4728]: I0227 11:46:55.257352 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kgv9v/must-gather-4ndc4" event={"ID":"e7a468d6-13f2-49c8-8ccc-faa520a96917","Type":"ContainerStarted","Data":"05da44d77e513ea74d50e12042946a53ba90d26b7c2dbbc217ce666a4a0de20d"} Feb 27 11:46:55 crc kubenswrapper[4728]: I0227 11:46:55.283164 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kgv9v/must-gather-4ndc4" podStartSLOduration=3.18252165 podStartE2EDuration="10.283145081s" podCreationTimestamp="2026-02-27 11:46:45 +0000 UTC" firstStartedPulling="2026-02-27 11:46:46.158685154 +0000 UTC m=+4826.121051260" lastFinishedPulling="2026-02-27 11:46:53.259308535 +0000 UTC m=+4833.221674691" observedRunningTime="2026-02-27 11:46:55.274920066 +0000 UTC m=+4835.237286172" watchObservedRunningTime="2026-02-27 11:46:55.283145081 +0000 UTC m=+4835.245511187" Feb 27 11:46:59 crc kubenswrapper[4728]: I0227 11:46:59.973333 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kgv9v/crc-debug-xdv5g"] Feb 27 11:46:59 crc kubenswrapper[4728]: I0227 11:46:59.992853 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kgv9v/crc-debug-xdv5g" Feb 27 11:47:00 crc kubenswrapper[4728]: I0227 11:47:00.129393 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v2wl\" (UniqueName: \"kubernetes.io/projected/bc9b4fcc-704b-4651-97cb-30373d764303-kube-api-access-6v2wl\") pod \"crc-debug-xdv5g\" (UID: \"bc9b4fcc-704b-4651-97cb-30373d764303\") " pod="openshift-must-gather-kgv9v/crc-debug-xdv5g" Feb 27 11:47:00 crc kubenswrapper[4728]: I0227 11:47:00.129780 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc9b4fcc-704b-4651-97cb-30373d764303-host\") pod \"crc-debug-xdv5g\" (UID: \"bc9b4fcc-704b-4651-97cb-30373d764303\") " pod="openshift-must-gather-kgv9v/crc-debug-xdv5g" Feb 27 11:47:00 crc kubenswrapper[4728]: I0227 11:47:00.232287 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc9b4fcc-704b-4651-97cb-30373d764303-host\") pod \"crc-debug-xdv5g\" (UID: \"bc9b4fcc-704b-4651-97cb-30373d764303\") " pod="openshift-must-gather-kgv9v/crc-debug-xdv5g" Feb 27 11:47:00 crc kubenswrapper[4728]: I0227 11:47:00.232480 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v2wl\" (UniqueName: \"kubernetes.io/projected/bc9b4fcc-704b-4651-97cb-30373d764303-kube-api-access-6v2wl\") pod \"crc-debug-xdv5g\" (UID: \"bc9b4fcc-704b-4651-97cb-30373d764303\") " pod="openshift-must-gather-kgv9v/crc-debug-xdv5g" Feb 27 11:47:00 crc kubenswrapper[4728]: I0227 11:47:00.233334 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc9b4fcc-704b-4651-97cb-30373d764303-host\") pod \"crc-debug-xdv5g\" (UID: \"bc9b4fcc-704b-4651-97cb-30373d764303\") " pod="openshift-must-gather-kgv9v/crc-debug-xdv5g" Feb 27 11:47:00 crc 
kubenswrapper[4728]: I0227 11:47:00.255904 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v2wl\" (UniqueName: \"kubernetes.io/projected/bc9b4fcc-704b-4651-97cb-30373d764303-kube-api-access-6v2wl\") pod \"crc-debug-xdv5g\" (UID: \"bc9b4fcc-704b-4651-97cb-30373d764303\") " pod="openshift-must-gather-kgv9v/crc-debug-xdv5g" Feb 27 11:47:00 crc kubenswrapper[4728]: I0227 11:47:00.329440 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kgv9v/crc-debug-xdv5g" Feb 27 11:47:00 crc kubenswrapper[4728]: W0227 11:47:00.386117 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc9b4fcc_704b_4651_97cb_30373d764303.slice/crio-e128b7c7af1a9df456f1876d19790e418ed81d4bece4bf95bb01227f64c0a787 WatchSource:0}: Error finding container e128b7c7af1a9df456f1876d19790e418ed81d4bece4bf95bb01227f64c0a787: Status 404 returned error can't find the container with id e128b7c7af1a9df456f1876d19790e418ed81d4bece4bf95bb01227f64c0a787 Feb 27 11:47:01 crc kubenswrapper[4728]: I0227 11:47:01.326217 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kgv9v/crc-debug-xdv5g" event={"ID":"bc9b4fcc-704b-4651-97cb-30373d764303","Type":"ContainerStarted","Data":"e128b7c7af1a9df456f1876d19790e418ed81d4bece4bf95bb01227f64c0a787"} Feb 27 11:47:07 crc kubenswrapper[4728]: I0227 11:47:07.725678 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:47:07 crc kubenswrapper[4728]: E0227 11:47:07.726404 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:47:12 crc kubenswrapper[4728]: I0227 11:47:12.463466 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kgv9v/crc-debug-xdv5g" event={"ID":"bc9b4fcc-704b-4651-97cb-30373d764303","Type":"ContainerStarted","Data":"722cdf861090f3023e0d00cbdbee4450dc3724f1f048be8d2cfde075a5ab1c5c"} Feb 27 11:47:12 crc kubenswrapper[4728]: I0227 11:47:12.486394 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kgv9v/crc-debug-xdv5g" podStartSLOduration=2.062466008 podStartE2EDuration="13.486375869s" podCreationTimestamp="2026-02-27 11:46:59 +0000 UTC" firstStartedPulling="2026-02-27 11:47:00.389563545 +0000 UTC m=+4840.351929651" lastFinishedPulling="2026-02-27 11:47:11.813473406 +0000 UTC m=+4851.775839512" observedRunningTime="2026-02-27 11:47:12.477822395 +0000 UTC m=+4852.440188501" watchObservedRunningTime="2026-02-27 11:47:12.486375869 +0000 UTC m=+4852.448741975" Feb 27 11:47:22 crc kubenswrapper[4728]: I0227 11:47:22.724838 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:47:22 crc kubenswrapper[4728]: E0227 11:47:22.725604 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:47:35 crc kubenswrapper[4728]: I0227 11:47:35.725001 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:47:35 crc kubenswrapper[4728]: E0227 11:47:35.725674 4728 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:47:46 crc kubenswrapper[4728]: I0227 11:47:46.727192 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:47:46 crc kubenswrapper[4728]: E0227 11:47:46.728072 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:47:50 crc kubenswrapper[4728]: I0227 11:47:50.907131 4728 generic.go:334] "Generic (PLEG): container finished" podID="bc9b4fcc-704b-4651-97cb-30373d764303" containerID="722cdf861090f3023e0d00cbdbee4450dc3724f1f048be8d2cfde075a5ab1c5c" exitCode=0 Feb 27 11:47:50 crc kubenswrapper[4728]: I0227 11:47:50.907213 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kgv9v/crc-debug-xdv5g" event={"ID":"bc9b4fcc-704b-4651-97cb-30373d764303","Type":"ContainerDied","Data":"722cdf861090f3023e0d00cbdbee4450dc3724f1f048be8d2cfde075a5ab1c5c"} Feb 27 11:47:52 crc kubenswrapper[4728]: I0227 11:47:52.069321 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kgv9v/crc-debug-xdv5g" Feb 27 11:47:52 crc kubenswrapper[4728]: I0227 11:47:52.109800 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kgv9v/crc-debug-xdv5g"] Feb 27 11:47:52 crc kubenswrapper[4728]: I0227 11:47:52.124868 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kgv9v/crc-debug-xdv5g"] Feb 27 11:47:52 crc kubenswrapper[4728]: I0227 11:47:52.163088 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v2wl\" (UniqueName: \"kubernetes.io/projected/bc9b4fcc-704b-4651-97cb-30373d764303-kube-api-access-6v2wl\") pod \"bc9b4fcc-704b-4651-97cb-30373d764303\" (UID: \"bc9b4fcc-704b-4651-97cb-30373d764303\") " Feb 27 11:47:52 crc kubenswrapper[4728]: I0227 11:47:52.163642 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc9b4fcc-704b-4651-97cb-30373d764303-host\") pod \"bc9b4fcc-704b-4651-97cb-30373d764303\" (UID: \"bc9b4fcc-704b-4651-97cb-30373d764303\") " Feb 27 11:47:52 crc kubenswrapper[4728]: I0227 11:47:52.163773 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc9b4fcc-704b-4651-97cb-30373d764303-host" (OuterVolumeSpecName: "host") pod "bc9b4fcc-704b-4651-97cb-30373d764303" (UID: "bc9b4fcc-704b-4651-97cb-30373d764303"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:47:52 crc kubenswrapper[4728]: I0227 11:47:52.164421 4728 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc9b4fcc-704b-4651-97cb-30373d764303-host\") on node \"crc\" DevicePath \"\"" Feb 27 11:47:52 crc kubenswrapper[4728]: I0227 11:47:52.171552 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9b4fcc-704b-4651-97cb-30373d764303-kube-api-access-6v2wl" (OuterVolumeSpecName: "kube-api-access-6v2wl") pod "bc9b4fcc-704b-4651-97cb-30373d764303" (UID: "bc9b4fcc-704b-4651-97cb-30373d764303"). InnerVolumeSpecName "kube-api-access-6v2wl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:47:52 crc kubenswrapper[4728]: I0227 11:47:52.267580 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v2wl\" (UniqueName: \"kubernetes.io/projected/bc9b4fcc-704b-4651-97cb-30373d764303-kube-api-access-6v2wl\") on node \"crc\" DevicePath \"\"" Feb 27 11:47:52 crc kubenswrapper[4728]: I0227 11:47:52.738198 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc9b4fcc-704b-4651-97cb-30373d764303" path="/var/lib/kubelet/pods/bc9b4fcc-704b-4651-97cb-30373d764303/volumes" Feb 27 11:47:52 crc kubenswrapper[4728]: I0227 11:47:52.932910 4728 scope.go:117] "RemoveContainer" containerID="722cdf861090f3023e0d00cbdbee4450dc3724f1f048be8d2cfde075a5ab1c5c" Feb 27 11:47:52 crc kubenswrapper[4728]: I0227 11:47:52.933482 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kgv9v/crc-debug-xdv5g" Feb 27 11:47:53 crc kubenswrapper[4728]: I0227 11:47:53.333215 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kgv9v/crc-debug-px48v"] Feb 27 11:47:53 crc kubenswrapper[4728]: E0227 11:47:53.333686 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9b4fcc-704b-4651-97cb-30373d764303" containerName="container-00" Feb 27 11:47:53 crc kubenswrapper[4728]: I0227 11:47:53.333699 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9b4fcc-704b-4651-97cb-30373d764303" containerName="container-00" Feb 27 11:47:53 crc kubenswrapper[4728]: I0227 11:47:53.333957 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9b4fcc-704b-4651-97cb-30373d764303" containerName="container-00" Feb 27 11:47:53 crc kubenswrapper[4728]: I0227 11:47:53.334797 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kgv9v/crc-debug-px48v" Feb 27 11:47:53 crc kubenswrapper[4728]: I0227 11:47:53.496310 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rdgg\" (UniqueName: \"kubernetes.io/projected/9f8d5874-51a9-4822-b3c5-a906ab7c36d2-kube-api-access-8rdgg\") pod \"crc-debug-px48v\" (UID: \"9f8d5874-51a9-4822-b3c5-a906ab7c36d2\") " pod="openshift-must-gather-kgv9v/crc-debug-px48v" Feb 27 11:47:53 crc kubenswrapper[4728]: I0227 11:47:53.496766 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f8d5874-51a9-4822-b3c5-a906ab7c36d2-host\") pod \"crc-debug-px48v\" (UID: \"9f8d5874-51a9-4822-b3c5-a906ab7c36d2\") " pod="openshift-must-gather-kgv9v/crc-debug-px48v" Feb 27 11:47:53 crc kubenswrapper[4728]: I0227 11:47:53.600620 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/9f8d5874-51a9-4822-b3c5-a906ab7c36d2-host\") pod \"crc-debug-px48v\" (UID: \"9f8d5874-51a9-4822-b3c5-a906ab7c36d2\") " pod="openshift-must-gather-kgv9v/crc-debug-px48v" Feb 27 11:47:53 crc kubenswrapper[4728]: I0227 11:47:53.600832 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f8d5874-51a9-4822-b3c5-a906ab7c36d2-host\") pod \"crc-debug-px48v\" (UID: \"9f8d5874-51a9-4822-b3c5-a906ab7c36d2\") " pod="openshift-must-gather-kgv9v/crc-debug-px48v" Feb 27 11:47:53 crc kubenswrapper[4728]: I0227 11:47:53.601122 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rdgg\" (UniqueName: \"kubernetes.io/projected/9f8d5874-51a9-4822-b3c5-a906ab7c36d2-kube-api-access-8rdgg\") pod \"crc-debug-px48v\" (UID: \"9f8d5874-51a9-4822-b3c5-a906ab7c36d2\") " pod="openshift-must-gather-kgv9v/crc-debug-px48v" Feb 27 11:47:53 crc kubenswrapper[4728]: I0227 11:47:53.637655 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rdgg\" (UniqueName: \"kubernetes.io/projected/9f8d5874-51a9-4822-b3c5-a906ab7c36d2-kube-api-access-8rdgg\") pod \"crc-debug-px48v\" (UID: \"9f8d5874-51a9-4822-b3c5-a906ab7c36d2\") " pod="openshift-must-gather-kgv9v/crc-debug-px48v" Feb 27 11:47:53 crc kubenswrapper[4728]: I0227 11:47:53.652295 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kgv9v/crc-debug-px48v" Feb 27 11:47:53 crc kubenswrapper[4728]: I0227 11:47:53.965352 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kgv9v/crc-debug-px48v" event={"ID":"9f8d5874-51a9-4822-b3c5-a906ab7c36d2","Type":"ContainerStarted","Data":"c40cc2484816bf243d5c98c24d2ac737044680c62a7ca2bbf00396a83e349c57"} Feb 27 11:47:53 crc kubenswrapper[4728]: I0227 11:47:53.965728 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kgv9v/crc-debug-px48v" event={"ID":"9f8d5874-51a9-4822-b3c5-a906ab7c36d2","Type":"ContainerStarted","Data":"435e4bf5e82ad71c1bf4c8bd5a921bf8558b8bb390ebae74ad19ad0a88058e1c"} Feb 27 11:47:53 crc kubenswrapper[4728]: I0227 11:47:53.991091 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kgv9v/crc-debug-px48v" podStartSLOduration=0.991074059 podStartE2EDuration="991.074059ms" podCreationTimestamp="2026-02-27 11:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 11:47:53.982743792 +0000 UTC m=+4893.945109898" watchObservedRunningTime="2026-02-27 11:47:53.991074059 +0000 UTC m=+4893.953440165" Feb 27 11:47:54 crc kubenswrapper[4728]: E0227 11:47:54.162418 4728 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f8d5874_51a9_4822_b3c5_a906ab7c36d2.slice/crio-c40cc2484816bf243d5c98c24d2ac737044680c62a7ca2bbf00396a83e349c57.scope\": RecentStats: unable to find data in memory cache]" Feb 27 11:47:54 crc kubenswrapper[4728]: I0227 11:47:54.980910 4728 generic.go:334] "Generic (PLEG): container finished" podID="9f8d5874-51a9-4822-b3c5-a906ab7c36d2" containerID="c40cc2484816bf243d5c98c24d2ac737044680c62a7ca2bbf00396a83e349c57" exitCode=0 Feb 27 11:47:54 crc 
kubenswrapper[4728]: I0227 11:47:54.980985 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kgv9v/crc-debug-px48v" event={"ID":"9f8d5874-51a9-4822-b3c5-a906ab7c36d2","Type":"ContainerDied","Data":"c40cc2484816bf243d5c98c24d2ac737044680c62a7ca2bbf00396a83e349c57"} Feb 27 11:47:56 crc kubenswrapper[4728]: I0227 11:47:56.156637 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kgv9v/crc-debug-px48v" Feb 27 11:47:56 crc kubenswrapper[4728]: I0227 11:47:56.215264 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kgv9v/crc-debug-px48v"] Feb 27 11:47:56 crc kubenswrapper[4728]: I0227 11:47:56.228787 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kgv9v/crc-debug-px48v"] Feb 27 11:47:56 crc kubenswrapper[4728]: I0227 11:47:56.266068 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f8d5874-51a9-4822-b3c5-a906ab7c36d2-host\") pod \"9f8d5874-51a9-4822-b3c5-a906ab7c36d2\" (UID: \"9f8d5874-51a9-4822-b3c5-a906ab7c36d2\") " Feb 27 11:47:56 crc kubenswrapper[4728]: I0227 11:47:56.266222 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f8d5874-51a9-4822-b3c5-a906ab7c36d2-host" (OuterVolumeSpecName: "host") pod "9f8d5874-51a9-4822-b3c5-a906ab7c36d2" (UID: "9f8d5874-51a9-4822-b3c5-a906ab7c36d2"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:47:56 crc kubenswrapper[4728]: I0227 11:47:56.266236 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rdgg\" (UniqueName: \"kubernetes.io/projected/9f8d5874-51a9-4822-b3c5-a906ab7c36d2-kube-api-access-8rdgg\") pod \"9f8d5874-51a9-4822-b3c5-a906ab7c36d2\" (UID: \"9f8d5874-51a9-4822-b3c5-a906ab7c36d2\") " Feb 27 11:47:56 crc kubenswrapper[4728]: I0227 11:47:56.267548 4728 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f8d5874-51a9-4822-b3c5-a906ab7c36d2-host\") on node \"crc\" DevicePath \"\"" Feb 27 11:47:56 crc kubenswrapper[4728]: I0227 11:47:56.271756 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f8d5874-51a9-4822-b3c5-a906ab7c36d2-kube-api-access-8rdgg" (OuterVolumeSpecName: "kube-api-access-8rdgg") pod "9f8d5874-51a9-4822-b3c5-a906ab7c36d2" (UID: "9f8d5874-51a9-4822-b3c5-a906ab7c36d2"). InnerVolumeSpecName "kube-api-access-8rdgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:47:56 crc kubenswrapper[4728]: I0227 11:47:56.369664 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rdgg\" (UniqueName: \"kubernetes.io/projected/9f8d5874-51a9-4822-b3c5-a906ab7c36d2-kube-api-access-8rdgg\") on node \"crc\" DevicePath \"\"" Feb 27 11:47:56 crc kubenswrapper[4728]: I0227 11:47:56.737958 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f8d5874-51a9-4822-b3c5-a906ab7c36d2" path="/var/lib/kubelet/pods/9f8d5874-51a9-4822-b3c5-a906ab7c36d2/volumes" Feb 27 11:47:57 crc kubenswrapper[4728]: I0227 11:47:57.006481 4728 scope.go:117] "RemoveContainer" containerID="c40cc2484816bf243d5c98c24d2ac737044680c62a7ca2bbf00396a83e349c57" Feb 27 11:47:57 crc kubenswrapper[4728]: I0227 11:47:57.006544 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kgv9v/crc-debug-px48v" Feb 27 11:47:57 crc kubenswrapper[4728]: I0227 11:47:57.540997 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kgv9v/crc-debug-77s6h"] Feb 27 11:47:57 crc kubenswrapper[4728]: E0227 11:47:57.543078 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f8d5874-51a9-4822-b3c5-a906ab7c36d2" containerName="container-00" Feb 27 11:47:57 crc kubenswrapper[4728]: I0227 11:47:57.543097 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f8d5874-51a9-4822-b3c5-a906ab7c36d2" containerName="container-00" Feb 27 11:47:57 crc kubenswrapper[4728]: I0227 11:47:57.543362 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f8d5874-51a9-4822-b3c5-a906ab7c36d2" containerName="container-00" Feb 27 11:47:57 crc kubenswrapper[4728]: I0227 11:47:57.544197 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kgv9v/crc-debug-77s6h" Feb 27 11:47:57 crc kubenswrapper[4728]: I0227 11:47:57.701985 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbrfb\" (UniqueName: \"kubernetes.io/projected/00275d92-096f-43df-817e-179979ff4ae7-kube-api-access-sbrfb\") pod \"crc-debug-77s6h\" (UID: \"00275d92-096f-43df-817e-179979ff4ae7\") " pod="openshift-must-gather-kgv9v/crc-debug-77s6h" Feb 27 11:47:57 crc kubenswrapper[4728]: I0227 11:47:57.702221 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00275d92-096f-43df-817e-179979ff4ae7-host\") pod \"crc-debug-77s6h\" (UID: \"00275d92-096f-43df-817e-179979ff4ae7\") " pod="openshift-must-gather-kgv9v/crc-debug-77s6h" Feb 27 11:47:57 crc kubenswrapper[4728]: I0227 11:47:57.804191 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/00275d92-096f-43df-817e-179979ff4ae7-host\") pod \"crc-debug-77s6h\" (UID: \"00275d92-096f-43df-817e-179979ff4ae7\") " pod="openshift-must-gather-kgv9v/crc-debug-77s6h" Feb 27 11:47:57 crc kubenswrapper[4728]: I0227 11:47:57.804292 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrfb\" (UniqueName: \"kubernetes.io/projected/00275d92-096f-43df-817e-179979ff4ae7-kube-api-access-sbrfb\") pod \"crc-debug-77s6h\" (UID: \"00275d92-096f-43df-817e-179979ff4ae7\") " pod="openshift-must-gather-kgv9v/crc-debug-77s6h" Feb 27 11:47:57 crc kubenswrapper[4728]: I0227 11:47:57.804438 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00275d92-096f-43df-817e-179979ff4ae7-host\") pod \"crc-debug-77s6h\" (UID: \"00275d92-096f-43df-817e-179979ff4ae7\") " pod="openshift-must-gather-kgv9v/crc-debug-77s6h" Feb 27 11:47:57 crc kubenswrapper[4728]: I0227 11:47:57.821772 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbrfb\" (UniqueName: \"kubernetes.io/projected/00275d92-096f-43df-817e-179979ff4ae7-kube-api-access-sbrfb\") pod \"crc-debug-77s6h\" (UID: \"00275d92-096f-43df-817e-179979ff4ae7\") " pod="openshift-must-gather-kgv9v/crc-debug-77s6h" Feb 27 11:47:57 crc kubenswrapper[4728]: I0227 11:47:57.860742 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kgv9v/crc-debug-77s6h" Feb 27 11:47:57 crc kubenswrapper[4728]: W0227 11:47:57.893496 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00275d92_096f_43df_817e_179979ff4ae7.slice/crio-d9bdeb77ee09794b85e86bfa40ccc672c644f5ee7a475f98372ff6f634a9f2ee WatchSource:0}: Error finding container d9bdeb77ee09794b85e86bfa40ccc672c644f5ee7a475f98372ff6f634a9f2ee: Status 404 returned error can't find the container with id d9bdeb77ee09794b85e86bfa40ccc672c644f5ee7a475f98372ff6f634a9f2ee Feb 27 11:47:58 crc kubenswrapper[4728]: I0227 11:47:58.022535 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kgv9v/crc-debug-77s6h" event={"ID":"00275d92-096f-43df-817e-179979ff4ae7","Type":"ContainerStarted","Data":"d9bdeb77ee09794b85e86bfa40ccc672c644f5ee7a475f98372ff6f634a9f2ee"} Feb 27 11:47:59 crc kubenswrapper[4728]: I0227 11:47:59.034295 4728 generic.go:334] "Generic (PLEG): container finished" podID="00275d92-096f-43df-817e-179979ff4ae7" containerID="5bb7aa3f14ddb37aa56a2be5c215b68bd5167309cc8381d9062ca65aebcc19e7" exitCode=0 Feb 27 11:47:59 crc kubenswrapper[4728]: I0227 11:47:59.034407 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kgv9v/crc-debug-77s6h" event={"ID":"00275d92-096f-43df-817e-179979ff4ae7","Type":"ContainerDied","Data":"5bb7aa3f14ddb37aa56a2be5c215b68bd5167309cc8381d9062ca65aebcc19e7"} Feb 27 11:47:59 crc kubenswrapper[4728]: I0227 11:47:59.076841 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kgv9v/crc-debug-77s6h"] Feb 27 11:47:59 crc kubenswrapper[4728]: I0227 11:47:59.090623 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kgv9v/crc-debug-77s6h"] Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.161877 4728 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29536548-kqwg9"] Feb 27 11:48:00 crc kubenswrapper[4728]: E0227 11:48:00.162632 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00275d92-096f-43df-817e-179979ff4ae7" containerName="container-00" Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.162652 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="00275d92-096f-43df-817e-179979ff4ae7" containerName="container-00" Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.162990 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="00275d92-096f-43df-817e-179979ff4ae7" containerName="container-00" Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.164114 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536548-kqwg9" Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.166098 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.166432 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.167769 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.183246 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536548-kqwg9"] Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.261013 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92vq8\" (UniqueName: \"kubernetes.io/projected/135f8f07-1e79-4050-9bb7-f4b3c4fb51c3-kube-api-access-92vq8\") pod \"auto-csr-approver-29536548-kqwg9\" (UID: \"135f8f07-1e79-4050-9bb7-f4b3c4fb51c3\") " pod="openshift-infra/auto-csr-approver-29536548-kqwg9" Feb 27 11:48:00 
crc kubenswrapper[4728]: I0227 11:48:00.364538 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92vq8\" (UniqueName: \"kubernetes.io/projected/135f8f07-1e79-4050-9bb7-f4b3c4fb51c3-kube-api-access-92vq8\") pod \"auto-csr-approver-29536548-kqwg9\" (UID: \"135f8f07-1e79-4050-9bb7-f4b3c4fb51c3\") " pod="openshift-infra/auto-csr-approver-29536548-kqwg9" Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.583081 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92vq8\" (UniqueName: \"kubernetes.io/projected/135f8f07-1e79-4050-9bb7-f4b3c4fb51c3-kube-api-access-92vq8\") pod \"auto-csr-approver-29536548-kqwg9\" (UID: \"135f8f07-1e79-4050-9bb7-f4b3c4fb51c3\") " pod="openshift-infra/auto-csr-approver-29536548-kqwg9" Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.736837 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kgv9v/crc-debug-77s6h" Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.748817 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:48:00 crc kubenswrapper[4728]: E0227 11:48:00.749841 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.797742 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536548-kqwg9" Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.877202 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00275d92-096f-43df-817e-179979ff4ae7-host\") pod \"00275d92-096f-43df-817e-179979ff4ae7\" (UID: \"00275d92-096f-43df-817e-179979ff4ae7\") " Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.877277 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00275d92-096f-43df-817e-179979ff4ae7-host" (OuterVolumeSpecName: "host") pod "00275d92-096f-43df-817e-179979ff4ae7" (UID: "00275d92-096f-43df-817e-179979ff4ae7"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.877382 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbrfb\" (UniqueName: \"kubernetes.io/projected/00275d92-096f-43df-817e-179979ff4ae7-kube-api-access-sbrfb\") pod \"00275d92-096f-43df-817e-179979ff4ae7\" (UID: \"00275d92-096f-43df-817e-179979ff4ae7\") " Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.879257 4728 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00275d92-096f-43df-817e-179979ff4ae7-host\") on node \"crc\" DevicePath \"\"" Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.881754 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00275d92-096f-43df-817e-179979ff4ae7-kube-api-access-sbrfb" (OuterVolumeSpecName: "kube-api-access-sbrfb") pod "00275d92-096f-43df-817e-179979ff4ae7" (UID: "00275d92-096f-43df-817e-179979ff4ae7"). InnerVolumeSpecName "kube-api-access-sbrfb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:48:00 crc kubenswrapper[4728]: I0227 11:48:00.982067 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbrfb\" (UniqueName: \"kubernetes.io/projected/00275d92-096f-43df-817e-179979ff4ae7-kube-api-access-sbrfb\") on node \"crc\" DevicePath \"\"" Feb 27 11:48:01 crc kubenswrapper[4728]: I0227 11:48:01.078821 4728 scope.go:117] "RemoveContainer" containerID="5bb7aa3f14ddb37aa56a2be5c215b68bd5167309cc8381d9062ca65aebcc19e7" Feb 27 11:48:01 crc kubenswrapper[4728]: I0227 11:48:01.078992 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kgv9v/crc-debug-77s6h" Feb 27 11:48:01 crc kubenswrapper[4728]: I0227 11:48:01.919387 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536548-kqwg9"] Feb 27 11:48:02 crc kubenswrapper[4728]: I0227 11:48:02.096448 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536548-kqwg9" event={"ID":"135f8f07-1e79-4050-9bb7-f4b3c4fb51c3","Type":"ContainerStarted","Data":"366ad24f98ff992a2808016a5419d53db864f3d3218578f14cef1906fb28fde3"} Feb 27 11:48:02 crc kubenswrapper[4728]: I0227 11:48:02.735877 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00275d92-096f-43df-817e-179979ff4ae7" path="/var/lib/kubelet/pods/00275d92-096f-43df-817e-179979ff4ae7/volumes" Feb 27 11:48:04 crc kubenswrapper[4728]: I0227 11:48:04.124356 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536548-kqwg9" event={"ID":"135f8f07-1e79-4050-9bb7-f4b3c4fb51c3","Type":"ContainerStarted","Data":"f24e931150cc0eae672c40e00cd891420df9e31249cb592257092babbb84ab5e"} Feb 27 11:48:04 crc kubenswrapper[4728]: I0227 11:48:04.143252 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536548-kqwg9" 
podStartSLOduration=3.073697933 podStartE2EDuration="4.14322126s" podCreationTimestamp="2026-02-27 11:48:00 +0000 UTC" firstStartedPulling="2026-02-27 11:48:01.920034188 +0000 UTC m=+4901.882400294" lastFinishedPulling="2026-02-27 11:48:02.989557515 +0000 UTC m=+4902.951923621" observedRunningTime="2026-02-27 11:48:04.138347596 +0000 UTC m=+4904.100713702" watchObservedRunningTime="2026-02-27 11:48:04.14322126 +0000 UTC m=+4904.105587376" Feb 27 11:48:05 crc kubenswrapper[4728]: I0227 11:48:05.134589 4728 generic.go:334] "Generic (PLEG): container finished" podID="135f8f07-1e79-4050-9bb7-f4b3c4fb51c3" containerID="f24e931150cc0eae672c40e00cd891420df9e31249cb592257092babbb84ab5e" exitCode=0 Feb 27 11:48:05 crc kubenswrapper[4728]: I0227 11:48:05.134656 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536548-kqwg9" event={"ID":"135f8f07-1e79-4050-9bb7-f4b3c4fb51c3","Type":"ContainerDied","Data":"f24e931150cc0eae672c40e00cd891420df9e31249cb592257092babbb84ab5e"} Feb 27 11:48:06 crc kubenswrapper[4728]: I0227 11:48:06.600441 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536548-kqwg9" Feb 27 11:48:06 crc kubenswrapper[4728]: I0227 11:48:06.736159 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92vq8\" (UniqueName: \"kubernetes.io/projected/135f8f07-1e79-4050-9bb7-f4b3c4fb51c3-kube-api-access-92vq8\") pod \"135f8f07-1e79-4050-9bb7-f4b3c4fb51c3\" (UID: \"135f8f07-1e79-4050-9bb7-f4b3c4fb51c3\") " Feb 27 11:48:06 crc kubenswrapper[4728]: I0227 11:48:06.748939 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/135f8f07-1e79-4050-9bb7-f4b3c4fb51c3-kube-api-access-92vq8" (OuterVolumeSpecName: "kube-api-access-92vq8") pod "135f8f07-1e79-4050-9bb7-f4b3c4fb51c3" (UID: "135f8f07-1e79-4050-9bb7-f4b3c4fb51c3"). InnerVolumeSpecName "kube-api-access-92vq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:48:06 crc kubenswrapper[4728]: I0227 11:48:06.839598 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92vq8\" (UniqueName: \"kubernetes.io/projected/135f8f07-1e79-4050-9bb7-f4b3c4fb51c3-kube-api-access-92vq8\") on node \"crc\" DevicePath \"\"" Feb 27 11:48:07 crc kubenswrapper[4728]: I0227 11:48:07.162700 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536548-kqwg9" event={"ID":"135f8f07-1e79-4050-9bb7-f4b3c4fb51c3","Type":"ContainerDied","Data":"366ad24f98ff992a2808016a5419d53db864f3d3218578f14cef1906fb28fde3"} Feb 27 11:48:07 crc kubenswrapper[4728]: I0227 11:48:07.162736 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="366ad24f98ff992a2808016a5419d53db864f3d3218578f14cef1906fb28fde3" Feb 27 11:48:07 crc kubenswrapper[4728]: I0227 11:48:07.162768 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536548-kqwg9" Feb 27 11:48:07 crc kubenswrapper[4728]: I0227 11:48:07.233458 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536542-967v2"] Feb 27 11:48:07 crc kubenswrapper[4728]: I0227 11:48:07.245186 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536542-967v2"] Feb 27 11:48:08 crc kubenswrapper[4728]: I0227 11:48:08.768333 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c02500ab-fff8-4c91-8db9-422ba4c5254b" path="/var/lib/kubelet/pods/c02500ab-fff8-4c91-8db9-422ba4c5254b/volumes" Feb 27 11:48:14 crc kubenswrapper[4728]: I0227 11:48:14.724655 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:48:14 crc kubenswrapper[4728]: E0227 11:48:14.725348 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:48:28 crc kubenswrapper[4728]: I0227 11:48:28.730413 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:48:28 crc kubenswrapper[4728]: E0227 11:48:28.731298 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:48:32 crc kubenswrapper[4728]: I0227 11:48:32.829899 4728 scope.go:117] "RemoveContainer" containerID="9996946dd66df0c0e950d5de9dc9784ea0ec3e9a851d49625d6b723c47469be8" Feb 27 11:48:39 crc kubenswrapper[4728]: I0227 11:48:39.725205 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:48:39 crc kubenswrapper[4728]: E0227 11:48:39.726156 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:48:44 crc kubenswrapper[4728]: I0227 11:48:44.249928 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_c73db32f-59f6-49d6-b9c4-f12c029ce737/aodh-api/0.log" Feb 27 11:48:44 crc kubenswrapper[4728]: I0227 11:48:44.330315 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_c73db32f-59f6-49d6-b9c4-f12c029ce737/aodh-evaluator/0.log" Feb 27 11:48:44 crc kubenswrapper[4728]: I0227 11:48:44.452034 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_c73db32f-59f6-49d6-b9c4-f12c029ce737/aodh-listener/0.log" Feb 27 11:48:44 crc kubenswrapper[4728]: I0227 11:48:44.507143 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_c73db32f-59f6-49d6-b9c4-f12c029ce737/aodh-notifier/0.log" Feb 27 11:48:44 crc kubenswrapper[4728]: I0227 11:48:44.580317 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-57f4b5948b-j7k68_29420e42-ebe7-4df2-8418-30b0fcb5c627/barbican-api/0.log" Feb 27 11:48:44 crc kubenswrapper[4728]: I0227 11:48:44.652779 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-57f4b5948b-j7k68_29420e42-ebe7-4df2-8418-30b0fcb5c627/barbican-api-log/0.log" Feb 27 11:48:44 crc kubenswrapper[4728]: I0227 11:48:44.765942 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7676cbc4f4-f7krv_e093b475-30b1-478c-b832-8b7b61a5f8f5/barbican-keystone-listener/0.log" Feb 27 11:48:44 crc kubenswrapper[4728]: I0227 11:48:44.858442 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7676cbc4f4-f7krv_e093b475-30b1-478c-b832-8b7b61a5f8f5/barbican-keystone-listener-log/0.log" Feb 27 11:48:44 crc kubenswrapper[4728]: I0227 11:48:44.983177 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b68b655dc-96rhh_a75dd8a4-3980-4825-bacf-fe2f0a9221d6/barbican-worker-log/0.log" Feb 27 11:48:45 crc kubenswrapper[4728]: I0227 11:48:45.011373 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-5b68b655dc-96rhh_a75dd8a4-3980-4825-bacf-fe2f0a9221d6/barbican-worker/0.log" Feb 27 11:48:45 crc kubenswrapper[4728]: I0227 11:48:45.100352 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-lpx8h_e83b55c5-e7f6-4e31-b65e-14e0f39a21ec/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:48:45 crc kubenswrapper[4728]: I0227 11:48:45.262458 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_77499a0a-be50-4d60-ae26-461a8c9742e5/ceilometer-central-agent/0.log" Feb 27 11:48:45 crc kubenswrapper[4728]: I0227 11:48:45.337627 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_77499a0a-be50-4d60-ae26-461a8c9742e5/ceilometer-notification-agent/0.log" Feb 27 11:48:45 crc kubenswrapper[4728]: I0227 11:48:45.365925 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_77499a0a-be50-4d60-ae26-461a8c9742e5/proxy-httpd/0.log" Feb 27 11:48:45 crc kubenswrapper[4728]: I0227 11:48:45.432325 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_77499a0a-be50-4d60-ae26-461a8c9742e5/sg-core/0.log" Feb 27 11:48:45 crc kubenswrapper[4728]: I0227 11:48:45.605717 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7d7ce831-a58b-49ab-a571-f4f8072e1dcd/cinder-api-log/0.log" Feb 27 11:48:45 crc kubenswrapper[4728]: I0227 11:48:45.647961 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7d7ce831-a58b-49ab-a571-f4f8072e1dcd/cinder-api/0.log" Feb 27 11:48:45 crc kubenswrapper[4728]: I0227 11:48:45.791243 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2/cinder-scheduler/0.log" Feb 27 11:48:45 crc kubenswrapper[4728]: I0227 11:48:45.850553 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_77e12b0f-4d5d-470b-8a0b-bdf2b09f42e2/probe/0.log" Feb 27 11:48:45 crc kubenswrapper[4728]: I0227 11:48:45.963050 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-9qk2n_5cf31d36-5693-4ec4-bd25-87524df66974/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:48:46 crc kubenswrapper[4728]: I0227 11:48:46.082900 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-xmsk4_15e0a257-4a55-402f-b410-fa67a8cc6b7d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:48:46 crc kubenswrapper[4728]: I0227 11:48:46.409165 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-6qtwm_13a77c59-db09-46ca-aa8c-88651c29be68/init/0.log" Feb 27 11:48:46 crc kubenswrapper[4728]: I0227 11:48:46.543870 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-6qtwm_13a77c59-db09-46ca-aa8c-88651c29be68/init/0.log" Feb 27 11:48:46 crc kubenswrapper[4728]: I0227 11:48:46.601222 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-6qtwm_13a77c59-db09-46ca-aa8c-88651c29be68/dnsmasq-dns/0.log" Feb 27 11:48:46 crc kubenswrapper[4728]: I0227 11:48:46.634428 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-rxdk5_6a90259d-d214-48ad-ad45-b4c87c9eac15/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:48:46 crc kubenswrapper[4728]: I0227 11:48:46.840859 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b4d607ed-8cde-48d2-9d5e-fa0903477b07/glance-httpd/0.log" Feb 27 11:48:46 crc kubenswrapper[4728]: I0227 11:48:46.881207 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_b4d607ed-8cde-48d2-9d5e-fa0903477b07/glance-log/0.log" Feb 27 11:48:47 crc kubenswrapper[4728]: I0227 11:48:47.014710 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_341ec042-9200-431a-b264-4a43228e1010/glance-httpd/0.log" Feb 27 11:48:47 crc kubenswrapper[4728]: I0227 11:48:47.048517 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_341ec042-9200-431a-b264-4a43228e1010/glance-log/0.log" Feb 27 11:48:47 crc kubenswrapper[4728]: I0227 11:48:47.623680 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-65cd755798-zq9gs_33024243-7ee6-4299-9858-0f66d98188a4/heat-api/0.log" Feb 27 11:48:47 crc kubenswrapper[4728]: I0227 11:48:47.757870 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-76cf949bdc-hqx69_01624ee9-8f8f-4c71-9eba-0fb900755b87/heat-engine/0.log" Feb 27 11:48:47 crc kubenswrapper[4728]: I0227 11:48:47.812961 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7b6bc5d6b-5bs8p_f59eff63-8d3b-4ebb-bbb6-cbb1c678a51f/heat-cfnapi/0.log" Feb 27 11:48:47 crc kubenswrapper[4728]: I0227 11:48:47.914777 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-f7k98_73079959-3e26-47dc-8d3a-e7051acb0574/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:48:47 crc kubenswrapper[4728]: I0227 11:48:47.946414 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-vh27d_1c346330-059d-4998-91ec-3014d3cfe1b9/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:48:48 crc kubenswrapper[4728]: I0227 11:48:48.872093 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29536501-ptgcq_c3bc7091-e483-405f-8cf6-a3494fa34cdf/keystone-cron/0.log" Feb 27 11:48:49 crc kubenswrapper[4728]: I0227 11:48:49.012737 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6cb4766cb5-wf4ln_055c18b7-6800-4f96-b544-7fc72a1eb468/keystone-api/0.log" Feb 27 11:48:49 crc kubenswrapper[4728]: I0227 11:48:49.059387 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_9a0e671c-98d8-42e6-bde3-624ca42b4d48/kube-state-metrics/0.log" Feb 27 11:48:49 crc kubenswrapper[4728]: I0227 11:48:49.189356 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-25clk_366ef133-dc99-408d-9a1b-220869733a30/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:48:49 crc kubenswrapper[4728]: I0227 11:48:49.213549 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-8m47v_7b98335a-8a74-44b4-aed8-8a56081f60ab/logging-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:48:49 crc kubenswrapper[4728]: I0227 11:48:49.459943 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_14f791fe-acdb-4266-a973-2bf0aa766623/mysqld-exporter/0.log" Feb 27 11:48:49 crc kubenswrapper[4728]: I0227 11:48:49.742363 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8589b5d689-mwmds_0b60bed6-0eb1-40f9-a560-5488d7b2a551/neutron-httpd/0.log" Feb 27 11:48:49 crc kubenswrapper[4728]: I0227 11:48:49.745241 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8589b5d689-mwmds_0b60bed6-0eb1-40f9-a560-5488d7b2a551/neutron-api/0.log" Feb 27 11:48:49 crc kubenswrapper[4728]: I0227 11:48:49.815772 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-vt26h_63c6a227-e2d7-4d6d-8519-a9f744424f6a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:48:50 crc kubenswrapper[4728]: I0227 11:48:50.511145 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_cb4ac27f-bce8-4cac-b444-4bc0921f975d/nova-api-api/0.log" Feb 27 11:48:50 crc kubenswrapper[4728]: I0227 11:48:50.623571 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_3315d635-81af-4a85-b41f-d9736448876a/nova-cell0-conductor-conductor/0.log" Feb 27 11:48:50 crc kubenswrapper[4728]: I0227 11:48:50.801240 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_cb4ac27f-bce8-4cac-b444-4bc0921f975d/nova-api-log/0.log" Feb 27 11:48:50 crc kubenswrapper[4728]: I0227 11:48:50.883703 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_045a229f-359b-4278-b366-233bc1921370/nova-cell1-conductor-conductor/0.log" Feb 27 11:48:51 crc kubenswrapper[4728]: I0227 11:48:51.085668 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_32bb7294-8dfc-4b00-9227-445c322a47a1/nova-cell1-novncproxy-novncproxy/0.log" Feb 27 11:48:51 crc kubenswrapper[4728]: I0227 11:48:51.097247 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-6gmbn_f5b59e71-081a-4fb3-aa6e-4b0e96c0ff03/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:48:51 crc kubenswrapper[4728]: I0227 11:48:51.395981 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_65e3743e-125d-4cf1-b8bb-85cd0197b833/nova-metadata-log/0.log" Feb 27 11:48:51 crc kubenswrapper[4728]: I0227 11:48:51.586066 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_96210d4e-7270-4181-b154-388611ae10fc/nova-scheduler-scheduler/0.log" Feb 27 11:48:51 crc kubenswrapper[4728]: I0227 11:48:51.599601 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a7b93ac4-55f2-4491-b4b4-f8abfd837dfa/mysql-bootstrap/0.log" Feb 27 11:48:51 crc kubenswrapper[4728]: I0227 11:48:51.880776 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a7b93ac4-55f2-4491-b4b4-f8abfd837dfa/mysql-bootstrap/0.log" Feb 27 11:48:51 crc kubenswrapper[4728]: I0227 11:48:51.940759 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a7b93ac4-55f2-4491-b4b4-f8abfd837dfa/galera/0.log" Feb 27 11:48:52 crc kubenswrapper[4728]: I0227 11:48:52.140429 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_803ed01f-b95c-4718-a5e8-3a864b0b7850/mysql-bootstrap/0.log" Feb 27 11:48:52 crc kubenswrapper[4728]: I0227 11:48:52.352107 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_803ed01f-b95c-4718-a5e8-3a864b0b7850/mysql-bootstrap/0.log" Feb 27 11:48:52 crc kubenswrapper[4728]: I0227 11:48:52.433184 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_803ed01f-b95c-4718-a5e8-3a864b0b7850/galera/0.log" Feb 27 11:48:52 crc kubenswrapper[4728]: I0227 11:48:52.579189 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_b6a05f62-98d5-4df6-9cbb-d4bbb1778642/openstackclient/0.log" Feb 27 11:48:52 crc kubenswrapper[4728]: I0227 11:48:52.693360 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-bd5fc_20d22b86-c3cb-4b12-8e88-35369d033e1e/ovn-controller/0.log" Feb 27 11:48:52 crc kubenswrapper[4728]: I0227 11:48:52.961207 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-nrfxq_73bd084b-f8c7-4dcd-8d01-fcf8f8587275/openstack-network-exporter/0.log" Feb 27 11:48:53 crc kubenswrapper[4728]: I0227 11:48:53.010976 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_65e3743e-125d-4cf1-b8bb-85cd0197b833/nova-metadata-metadata/0.log" Feb 27 11:48:53 crc kubenswrapper[4728]: I0227 11:48:53.115491 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bhldn_e03375ec-5705-44e3-9dda-686c809cf4ef/ovsdb-server-init/0.log" Feb 27 11:48:53 crc kubenswrapper[4728]: I0227 11:48:53.327377 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bhldn_e03375ec-5705-44e3-9dda-686c809cf4ef/ovsdb-server-init/0.log" Feb 27 11:48:53 crc kubenswrapper[4728]: I0227 11:48:53.358652 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bhldn_e03375ec-5705-44e3-9dda-686c809cf4ef/ovsdb-server/0.log" Feb 27 11:48:53 crc kubenswrapper[4728]: I0227 11:48:53.370229 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bhldn_e03375ec-5705-44e3-9dda-686c809cf4ef/ovs-vswitchd/0.log" Feb 27 11:48:53 crc kubenswrapper[4728]: I0227 11:48:53.639285 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7d741423-4119-4fd9-9314-50153ed061b6/openstack-network-exporter/0.log" Feb 27 11:48:53 crc kubenswrapper[4728]: I0227 11:48:53.657197 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-kjj79_d9757dd1-ea1e-492b-8781-9e64f6965762/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:48:53 crc kubenswrapper[4728]: I0227 11:48:53.725147 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:48:53 crc kubenswrapper[4728]: E0227 11:48:53.725464 4728 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:48:53 crc kubenswrapper[4728]: I0227 11:48:53.734951 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7d741423-4119-4fd9-9314-50153ed061b6/ovn-northd/0.log" Feb 27 11:48:53 crc kubenswrapper[4728]: I0227 11:48:53.878347 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_94cc80b4-aa0c-439a-be56-22b86add0bb3/ovsdbserver-nb/0.log" Feb 27 11:48:53 crc kubenswrapper[4728]: I0227 11:48:53.891447 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_94cc80b4-aa0c-439a-be56-22b86add0bb3/openstack-network-exporter/0.log" Feb 27 11:48:54 crc kubenswrapper[4728]: I0227 11:48:54.094161 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0be54b22-6600-4033-92e9-10fd8a540238/openstack-network-exporter/0.log" Feb 27 11:48:54 crc kubenswrapper[4728]: I0227 11:48:54.203845 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0be54b22-6600-4033-92e9-10fd8a540238/ovsdbserver-sb/0.log" Feb 27 11:48:54 crc kubenswrapper[4728]: I0227 11:48:54.446483 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-768d449478-rt9ff_89946113-053b-4a87-acd4-50402c66d0c2/placement-api/0.log" Feb 27 11:48:54 crc kubenswrapper[4728]: I0227 11:48:54.461918 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-768d449478-rt9ff_89946113-053b-4a87-acd4-50402c66d0c2/placement-log/0.log" Feb 27 11:48:54 crc kubenswrapper[4728]: I0227 11:48:54.463748 4728 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d232c99d-32bc-45e3-bc7b-c9dca99571ac/init-config-reloader/0.log" Feb 27 11:48:54 crc kubenswrapper[4728]: I0227 11:48:54.702143 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d232c99d-32bc-45e3-bc7b-c9dca99571ac/init-config-reloader/0.log" Feb 27 11:48:54 crc kubenswrapper[4728]: I0227 11:48:54.705761 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d232c99d-32bc-45e3-bc7b-c9dca99571ac/prometheus/0.log" Feb 27 11:48:54 crc kubenswrapper[4728]: I0227 11:48:54.737244 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d232c99d-32bc-45e3-bc7b-c9dca99571ac/config-reloader/0.log" Feb 27 11:48:54 crc kubenswrapper[4728]: I0227 11:48:54.780712 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d232c99d-32bc-45e3-bc7b-c9dca99571ac/thanos-sidecar/0.log" Feb 27 11:48:54 crc kubenswrapper[4728]: I0227 11:48:54.970846 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7363c956-6c7e-4e11-bfb1-6be6ba94771e/setup-container/0.log" Feb 27 11:48:55 crc kubenswrapper[4728]: I0227 11:48:55.198466 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7363c956-6c7e-4e11-bfb1-6be6ba94771e/rabbitmq/0.log" Feb 27 11:48:55 crc kubenswrapper[4728]: I0227 11:48:55.258794 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7363c956-6c7e-4e11-bfb1-6be6ba94771e/setup-container/0.log" Feb 27 11:48:55 crc kubenswrapper[4728]: I0227 11:48:55.341311 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_73e4e1d2-5dc4-4578-bd53-f83b95638094/setup-container/0.log" Feb 27 11:48:55 crc kubenswrapper[4728]: I0227 11:48:55.513463 4728 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_73e4e1d2-5dc4-4578-bd53-f83b95638094/setup-container/0.log" Feb 27 11:48:55 crc kubenswrapper[4728]: I0227 11:48:55.583028 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_2fdabfa1-9c8c-4434-8a44-30d40a18a023/setup-container/0.log" Feb 27 11:48:55 crc kubenswrapper[4728]: I0227 11:48:55.613591 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_73e4e1d2-5dc4-4578-bd53-f83b95638094/rabbitmq/0.log" Feb 27 11:48:55 crc kubenswrapper[4728]: I0227 11:48:55.864613 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_2fdabfa1-9c8c-4434-8a44-30d40a18a023/setup-container/0.log" Feb 27 11:48:55 crc kubenswrapper[4728]: I0227 11:48:55.950343 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_2fdabfa1-9c8c-4434-8a44-30d40a18a023/rabbitmq/0.log" Feb 27 11:48:55 crc kubenswrapper[4728]: I0227 11:48:55.984009 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_008a6414-799f-47de-a238-a5fdefc314ca/setup-container/0.log" Feb 27 11:48:56 crc kubenswrapper[4728]: I0227 11:48:56.183087 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_008a6414-799f-47de-a238-a5fdefc314ca/setup-container/0.log" Feb 27 11:48:56 crc kubenswrapper[4728]: I0227 11:48:56.267719 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-7t7vd_da0fc581-2d10-45bd-aecf-8af4e8964c24/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:48:56 crc kubenswrapper[4728]: I0227 11:48:56.277271 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_008a6414-799f-47de-a238-a5fdefc314ca/rabbitmq/0.log" Feb 27 11:48:56 crc kubenswrapper[4728]: I0227 11:48:56.488346 4728 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-f57ll_f08cf8c5-14af-42cf-be14-97e3871f6801/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:48:56 crc kubenswrapper[4728]: I0227 11:48:56.488832 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-7mr2p_7b022b91-04fe-443e-af6c-d47673e6f22f/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:48:56 crc kubenswrapper[4728]: I0227 11:48:56.771167 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-6v98n_72eee809-e748-4af7-a5b9-3f59015b2d8d/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:48:56 crc kubenswrapper[4728]: I0227 11:48:56.859641 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-8tlqs_bfce4106-f546-4fdb-af8a-adc09ccd8b17/ssh-known-hosts-edpm-deployment/0.log" Feb 27 11:48:57 crc kubenswrapper[4728]: I0227 11:48:57.094163 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6bfbb66dbc-8ddj8_0633a337-c673-43b4-a012-ac41403a02a1/proxy-server/0.log" Feb 27 11:48:57 crc kubenswrapper[4728]: I0227 11:48:57.130569 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6bfbb66dbc-8ddj8_0633a337-c673-43b4-a012-ac41403a02a1/proxy-httpd/0.log" Feb 27 11:48:57 crc kubenswrapper[4728]: I0227 11:48:57.261615 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-wnkfg_ec9639a6-9853-49cd-8215-0301af98d73b/swift-ring-rebalance/0.log" Feb 27 11:48:57 crc kubenswrapper[4728]: I0227 11:48:57.397736 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec0a9664-7538-43dd-904d-c386d569999e/account-auditor/0.log" Feb 27 11:48:57 crc kubenswrapper[4728]: I0227 11:48:57.500175 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ec0a9664-7538-43dd-904d-c386d569999e/account-server/0.log" Feb 27 11:48:57 crc kubenswrapper[4728]: I0227 11:48:57.533774 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec0a9664-7538-43dd-904d-c386d569999e/account-reaper/0.log" Feb 27 11:48:57 crc kubenswrapper[4728]: I0227 11:48:57.534633 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec0a9664-7538-43dd-904d-c386d569999e/account-replicator/0.log" Feb 27 11:48:57 crc kubenswrapper[4728]: I0227 11:48:57.641518 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec0a9664-7538-43dd-904d-c386d569999e/container-auditor/0.log" Feb 27 11:48:57 crc kubenswrapper[4728]: I0227 11:48:57.778324 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec0a9664-7538-43dd-904d-c386d569999e/container-replicator/0.log" Feb 27 11:48:57 crc kubenswrapper[4728]: I0227 11:48:57.852356 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec0a9664-7538-43dd-904d-c386d569999e/container-server/0.log" Feb 27 11:48:57 crc kubenswrapper[4728]: I0227 11:48:57.857350 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec0a9664-7538-43dd-904d-c386d569999e/container-updater/0.log" Feb 27 11:48:57 crc kubenswrapper[4728]: I0227 11:48:57.945497 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec0a9664-7538-43dd-904d-c386d569999e/object-auditor/0.log" Feb 27 11:48:58 crc kubenswrapper[4728]: I0227 11:48:58.021314 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec0a9664-7538-43dd-904d-c386d569999e/object-expirer/0.log" Feb 27 11:48:58 crc kubenswrapper[4728]: I0227 11:48:58.194721 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ec0a9664-7538-43dd-904d-c386d569999e/object-updater/0.log" Feb 27 11:48:58 crc kubenswrapper[4728]: I0227 11:48:58.208645 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec0a9664-7538-43dd-904d-c386d569999e/object-server/0.log" Feb 27 11:48:58 crc kubenswrapper[4728]: I0227 11:48:58.234680 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec0a9664-7538-43dd-904d-c386d569999e/object-replicator/0.log" Feb 27 11:48:58 crc kubenswrapper[4728]: I0227 11:48:58.259018 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec0a9664-7538-43dd-904d-c386d569999e/rsync/0.log" Feb 27 11:48:58 crc kubenswrapper[4728]: I0227 11:48:58.458614 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ec0a9664-7538-43dd-904d-c386d569999e/swift-recon-cron/0.log" Feb 27 11:48:58 crc kubenswrapper[4728]: I0227 11:48:58.571014 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-mjg5q_f509a2d6-f273-4497-8dad-171d2f53d125/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:48:59 crc kubenswrapper[4728]: I0227 11:48:59.494749 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-s8ngc_db929680-5ba2-4112-8eaf-c94cdd3b0f89/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:48:59 crc kubenswrapper[4728]: I0227 11:48:59.527600 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_87d887cf-2e16-418f-a903-a24551953f9d/tempest-tests-tempest-tests-runner/0.log" Feb 27 11:48:59 crc kubenswrapper[4728]: I0227 11:48:59.875253 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_c5d9d82a-07e5-4e3c-abfe-9a12cd46d757/test-operator-logs-container/0.log" Feb 27 11:48:59 crc kubenswrapper[4728]: I0227 11:48:59.879539 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-bhhg4_699102c6-20ff-4e22-8981-e8c12a0c5a01/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 11:49:04 crc kubenswrapper[4728]: I0227 11:49:04.724976 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:49:04 crc kubenswrapper[4728]: E0227 11:49:04.725811 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:49:10 crc kubenswrapper[4728]: I0227 11:49:10.759235 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_6e834d11-1d93-42ba-8dfe-f17c9faddff2/memcached/0.log" Feb 27 11:49:15 crc kubenswrapper[4728]: I0227 11:49:15.725460 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:49:15 crc kubenswrapper[4728]: I0227 11:49:15.988541 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"e81ecfd96a35ea0b894753598830f159eb681a2f01ac3ed1749128d3d4f90dbd"} Feb 27 11:49:32 crc kubenswrapper[4728]: I0227 11:49:32.399577 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq_a67acb1e-a1c9-44b6-805f-39313a1961cf/util/0.log" Feb 27 11:49:32 crc kubenswrapper[4728]: I0227 11:49:32.591844 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq_a67acb1e-a1c9-44b6-805f-39313a1961cf/pull/0.log" Feb 27 11:49:32 crc kubenswrapper[4728]: I0227 11:49:32.625145 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq_a67acb1e-a1c9-44b6-805f-39313a1961cf/util/0.log" Feb 27 11:49:32 crc kubenswrapper[4728]: I0227 11:49:32.649675 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq_a67acb1e-a1c9-44b6-805f-39313a1961cf/pull/0.log" Feb 27 11:49:33 crc kubenswrapper[4728]: I0227 11:49:33.405205 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq_a67acb1e-a1c9-44b6-805f-39313a1961cf/pull/0.log" Feb 27 11:49:33 crc kubenswrapper[4728]: I0227 11:49:33.425620 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq_a67acb1e-a1c9-44b6-805f-39313a1961cf/extract/0.log" Feb 27 11:49:33 crc kubenswrapper[4728]: I0227 11:49:33.447862 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_434c282bf32f6a987b87de30c7c4520ff9c0517cb06f6f8414bcb82fa6slzlq_a67acb1e-a1c9-44b6-805f-39313a1961cf/util/0.log" Feb 27 11:49:33 crc kubenswrapper[4728]: I0227 11:49:33.825744 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5d87c9d997-l2ffb_65769a2d-f0c4-4af4-b5ce-f5918e90bfbf/manager/0.log" Feb 27 11:49:34 crc 
kubenswrapper[4728]: I0227 11:49:34.195784 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64db6967f8-j8qfr_7f038385-91be-41e0-b79c-0f6160bdf07a/manager/0.log" Feb 27 11:49:34 crc kubenswrapper[4728]: I0227 11:49:34.553852 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-cf99c678f-sfznl_c34e67f9-f7a4-4918-85c5-9d26f0f47f83/manager/0.log" Feb 27 11:49:34 crc kubenswrapper[4728]: I0227 11:49:34.665034 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-78bc7f9bd9-2prk8_35b1713c-2089-47f7-8537-468fc7a8f79e/manager/0.log" Feb 27 11:49:35 crc kubenswrapper[4728]: I0227 11:49:35.367840 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-545456dc4-qrz29_20a64934-5596-44eb-9646-115ff6b4e9c8/manager/0.log" Feb 27 11:49:35 crc kubenswrapper[4728]: I0227 11:49:35.386795 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-f7fcc58b9-2gxfg_0c1ed09d-4dfa-4cf1-b7c3-4562f3811ed9/manager/0.log" Feb 27 11:49:35 crc kubenswrapper[4728]: I0227 11:49:35.790809 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55ffd4876b-qkrxj_df32cd75-8d92-4dbe-9e96-0c943a0f2614/manager/0.log" Feb 27 11:49:35 crc kubenswrapper[4728]: I0227 11:49:35.867896 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-ngg85_9556c640-95cc-4030-a15d-eb61a6bcca3b/manager/0.log" Feb 27 11:49:36 crc kubenswrapper[4728]: I0227 11:49:36.117446 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-556b8b874-rxh2d_9a392698-f140-4562-b72c-7cbe0a868f1c/manager/0.log" Feb 27 
11:49:36 crc kubenswrapper[4728]: I0227 11:49:36.339012 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54688575f-5xx5w_c0bf29f3-9fc3-4d2e-b2f6-979e2ed8f3ce/manager/0.log" Feb 27 11:49:36 crc kubenswrapper[4728]: I0227 11:49:36.366204 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-2whpj_6fd71010-803b-40dc-8cba-72c0a8987b5b/manager/0.log" Feb 27 11:49:36 crc kubenswrapper[4728]: I0227 11:49:36.465485 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74b6b5dc96-zwlhz_2343eecc-62d9-4859-9cf9-6a3ec71f4906/manager/0.log" Feb 27 11:49:36 crc kubenswrapper[4728]: I0227 11:49:36.588999 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5d86c7ddb7-6f5z2_8303b246-6796-4521-8c7c-95234d371456/manager/0.log" Feb 27 11:49:36 crc kubenswrapper[4728]: I0227 11:49:36.683369 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cht92p_d0eb0111-8103-4481-af5b-9507858073ef/manager/0.log" Feb 27 11:49:37 crc kubenswrapper[4728]: I0227 11:49:37.113712 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-76c4d9668d-f265t_d0eba51d-03cc-4b20-88ea-58f1d34276e5/operator/0.log" Feb 27 11:49:37 crc kubenswrapper[4728]: I0227 11:49:37.121149 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-z7cr9_da26da61-4baa-47ad-a1eb-57d3fd410f22/registry-server/0.log" Feb 27 11:49:37 crc kubenswrapper[4728]: I0227 11:49:37.427852 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-75684d597f-hk2dt_10e9cde6-9b86-4bb8-9171-f73a3a034411/manager/0.log" 
Feb 27 11:49:37 crc kubenswrapper[4728]: I0227 11:49:37.583121 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-648564c9fc-rrwdq_4c16843a-96c9-45b3-9203-6beef6d0c61c/manager/0.log" Feb 27 11:49:37 crc kubenswrapper[4728]: I0227 11:49:37.693290 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5z9sj_980c029c-25d4-4063-be4d-ea30564c2120/operator/0.log" Feb 27 11:49:37 crc kubenswrapper[4728]: I0227 11:49:37.834930 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9b9ff9f4d-4s265_11dda47d-9181-4a72-a9ea-68874d9ca367/manager/0.log" Feb 27 11:49:38 crc kubenswrapper[4728]: I0227 11:49:38.226288 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-55b5ff4dbb-tq6s8_e81ced4b-a6cf-4dba-964a-52f8bcbd82ae/manager/0.log" Feb 27 11:49:38 crc kubenswrapper[4728]: I0227 11:49:38.452975 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-jnbr4_d246350e-c4cc-4f56-948b-3515032e7645/manager/0.log" Feb 27 11:49:38 crc kubenswrapper[4728]: I0227 11:49:38.578475 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7776f7b585-jhzvv_4af1c14f-ff9c-4c9c-a110-4f6b462c7acd/manager/0.log" Feb 27 11:49:39 crc kubenswrapper[4728]: I0227 11:49:39.009244 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6d5879f6b9-x7nrw_08f5d7df-9e0f-4c13-8799-bcb605842ffd/manager/0.log" Feb 27 11:49:45 crc kubenswrapper[4728]: I0227 11:49:45.352994 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6db6876945-8dzvn_46e1e2f4-2677-4d4f-88c3-22c7f3942e12/manager/0.log" Feb 27 11:50:00 crc kubenswrapper[4728]: I0227 11:50:00.176411 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536550-6m6lp"] Feb 27 11:50:00 crc kubenswrapper[4728]: E0227 11:50:00.177470 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135f8f07-1e79-4050-9bb7-f4b3c4fb51c3" containerName="oc" Feb 27 11:50:00 crc kubenswrapper[4728]: I0227 11:50:00.177484 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="135f8f07-1e79-4050-9bb7-f4b3c4fb51c3" containerName="oc" Feb 27 11:50:00 crc kubenswrapper[4728]: I0227 11:50:00.177722 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="135f8f07-1e79-4050-9bb7-f4b3c4fb51c3" containerName="oc" Feb 27 11:50:00 crc kubenswrapper[4728]: I0227 11:50:00.179640 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536550-6m6lp" Feb 27 11:50:00 crc kubenswrapper[4728]: I0227 11:50:00.183011 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:50:00 crc kubenswrapper[4728]: I0227 11:50:00.183390 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:50:00 crc kubenswrapper[4728]: I0227 11:50:00.183710 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:50:00 crc kubenswrapper[4728]: I0227 11:50:00.192138 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536550-6m6lp"] Feb 27 11:50:00 crc kubenswrapper[4728]: I0227 11:50:00.281946 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsh9h\" (UniqueName: 
\"kubernetes.io/projected/e0ba33a7-8d7d-49b3-9c9b-72902af18214-kube-api-access-bsh9h\") pod \"auto-csr-approver-29536550-6m6lp\" (UID: \"e0ba33a7-8d7d-49b3-9c9b-72902af18214\") " pod="openshift-infra/auto-csr-approver-29536550-6m6lp" Feb 27 11:50:00 crc kubenswrapper[4728]: I0227 11:50:00.383378 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsh9h\" (UniqueName: \"kubernetes.io/projected/e0ba33a7-8d7d-49b3-9c9b-72902af18214-kube-api-access-bsh9h\") pod \"auto-csr-approver-29536550-6m6lp\" (UID: \"e0ba33a7-8d7d-49b3-9c9b-72902af18214\") " pod="openshift-infra/auto-csr-approver-29536550-6m6lp" Feb 27 11:50:00 crc kubenswrapper[4728]: I0227 11:50:00.415538 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsh9h\" (UniqueName: \"kubernetes.io/projected/e0ba33a7-8d7d-49b3-9c9b-72902af18214-kube-api-access-bsh9h\") pod \"auto-csr-approver-29536550-6m6lp\" (UID: \"e0ba33a7-8d7d-49b3-9c9b-72902af18214\") " pod="openshift-infra/auto-csr-approver-29536550-6m6lp" Feb 27 11:50:00 crc kubenswrapper[4728]: I0227 11:50:00.550896 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536550-6m6lp" Feb 27 11:50:01 crc kubenswrapper[4728]: I0227 11:50:01.166663 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536550-6m6lp"] Feb 27 11:50:01 crc kubenswrapper[4728]: W0227 11:50:01.191400 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0ba33a7_8d7d_49b3_9c9b_72902af18214.slice/crio-d4aba6979a3a42b5eadaed1d2aae9adcceda0b360a9a096efff5a722e323bd53 WatchSource:0}: Error finding container d4aba6979a3a42b5eadaed1d2aae9adcceda0b360a9a096efff5a722e323bd53: Status 404 returned error can't find the container with id d4aba6979a3a42b5eadaed1d2aae9adcceda0b360a9a096efff5a722e323bd53 Feb 27 11:50:01 crc kubenswrapper[4728]: I0227 11:50:01.202513 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 11:50:01 crc kubenswrapper[4728]: I0227 11:50:01.552840 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536550-6m6lp" event={"ID":"e0ba33a7-8d7d-49b3-9c9b-72902af18214","Type":"ContainerStarted","Data":"d4aba6979a3a42b5eadaed1d2aae9adcceda0b360a9a096efff5a722e323bd53"} Feb 27 11:50:03 crc kubenswrapper[4728]: I0227 11:50:03.576847 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536550-6m6lp" event={"ID":"e0ba33a7-8d7d-49b3-9c9b-72902af18214","Type":"ContainerStarted","Data":"fb740cad9b5de1bd6362c3716b8ed2e64d984fe7d104cf2e42db49148be9c2ed"} Feb 27 11:50:03 crc kubenswrapper[4728]: I0227 11:50:03.600578 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536550-6m6lp" podStartSLOduration=2.551077586 podStartE2EDuration="3.600555026s" podCreationTimestamp="2026-02-27 11:50:00 +0000 UTC" firstStartedPulling="2026-02-27 11:50:01.200691948 +0000 UTC m=+5021.163058054" 
lastFinishedPulling="2026-02-27 11:50:02.250169358 +0000 UTC m=+5022.212535494" observedRunningTime="2026-02-27 11:50:03.588429866 +0000 UTC m=+5023.550795972" watchObservedRunningTime="2026-02-27 11:50:03.600555026 +0000 UTC m=+5023.562921132" Feb 27 11:50:03 crc kubenswrapper[4728]: I0227 11:50:03.602856 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jzdfv_5bbc683f-19d5-4c72-83a3-511c300446ad/control-plane-machine-set-operator/0.log" Feb 27 11:50:03 crc kubenswrapper[4728]: I0227 11:50:03.803565 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-fwnnw_ba876990-999b-4cd2-bb68-624cbf1b5701/kube-rbac-proxy/0.log" Feb 27 11:50:03 crc kubenswrapper[4728]: I0227 11:50:03.832279 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-fwnnw_ba876990-999b-4cd2-bb68-624cbf1b5701/machine-api-operator/0.log" Feb 27 11:50:05 crc kubenswrapper[4728]: I0227 11:50:05.597821 4728 generic.go:334] "Generic (PLEG): container finished" podID="e0ba33a7-8d7d-49b3-9c9b-72902af18214" containerID="fb740cad9b5de1bd6362c3716b8ed2e64d984fe7d104cf2e42db49148be9c2ed" exitCode=0 Feb 27 11:50:05 crc kubenswrapper[4728]: I0227 11:50:05.597898 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536550-6m6lp" event={"ID":"e0ba33a7-8d7d-49b3-9c9b-72902af18214","Type":"ContainerDied","Data":"fb740cad9b5de1bd6362c3716b8ed2e64d984fe7d104cf2e42db49148be9c2ed"} Feb 27 11:50:07 crc kubenswrapper[4728]: I0227 11:50:07.102263 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536550-6m6lp" Feb 27 11:50:07 crc kubenswrapper[4728]: I0227 11:50:07.152721 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsh9h\" (UniqueName: \"kubernetes.io/projected/e0ba33a7-8d7d-49b3-9c9b-72902af18214-kube-api-access-bsh9h\") pod \"e0ba33a7-8d7d-49b3-9c9b-72902af18214\" (UID: \"e0ba33a7-8d7d-49b3-9c9b-72902af18214\") " Feb 27 11:50:07 crc kubenswrapper[4728]: I0227 11:50:07.173481 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ba33a7-8d7d-49b3-9c9b-72902af18214-kube-api-access-bsh9h" (OuterVolumeSpecName: "kube-api-access-bsh9h") pod "e0ba33a7-8d7d-49b3-9c9b-72902af18214" (UID: "e0ba33a7-8d7d-49b3-9c9b-72902af18214"). InnerVolumeSpecName "kube-api-access-bsh9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:50:07 crc kubenswrapper[4728]: I0227 11:50:07.257216 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsh9h\" (UniqueName: \"kubernetes.io/projected/e0ba33a7-8d7d-49b3-9c9b-72902af18214-kube-api-access-bsh9h\") on node \"crc\" DevicePath \"\"" Feb 27 11:50:07 crc kubenswrapper[4728]: I0227 11:50:07.639450 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536550-6m6lp" event={"ID":"e0ba33a7-8d7d-49b3-9c9b-72902af18214","Type":"ContainerDied","Data":"d4aba6979a3a42b5eadaed1d2aae9adcceda0b360a9a096efff5a722e323bd53"} Feb 27 11:50:07 crc kubenswrapper[4728]: I0227 11:50:07.639915 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4aba6979a3a42b5eadaed1d2aae9adcceda0b360a9a096efff5a722e323bd53" Feb 27 11:50:07 crc kubenswrapper[4728]: I0227 11:50:07.639593 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536550-6m6lp" Feb 27 11:50:07 crc kubenswrapper[4728]: I0227 11:50:07.713023 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536544-7zmq4"] Feb 27 11:50:07 crc kubenswrapper[4728]: I0227 11:50:07.728880 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536544-7zmq4"] Feb 27 11:50:08 crc kubenswrapper[4728]: I0227 11:50:08.746562 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ed52116-68d5-45f7-ab26-ae8973646d4e" path="/var/lib/kubelet/pods/6ed52116-68d5-45f7-ab26-ae8973646d4e/volumes" Feb 27 11:50:18 crc kubenswrapper[4728]: I0227 11:50:18.779019 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-m79gz_c8862af5-495d-455a-9b63-aeb694a4f768/cert-manager-controller/0.log" Feb 27 11:50:18 crc kubenswrapper[4728]: I0227 11:50:18.958707 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-cf62w_f79a2ce3-db43-4848-8b1a-4ebee40a850b/cert-manager-cainjector/0.log" Feb 27 11:50:19 crc kubenswrapper[4728]: I0227 11:50:19.049856 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-lgkck_8bbbf6f7-c5e4-4942-8b37-cd24c2e729d9/cert-manager-webhook/0.log" Feb 27 11:50:33 crc kubenswrapper[4728]: I0227 11:50:33.040689 4728 scope.go:117] "RemoveContainer" containerID="d3ec9edad13897d84c73e534d72f74cf09011c43c7bbbf4a1c057060b8f938c2" Feb 27 11:50:36 crc kubenswrapper[4728]: I0227 11:50:36.485513 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-4cmrc_9e8ec681-17c4-4bcd-b81a-92de549c1523/nmstate-console-plugin/0.log" Feb 27 11:50:37 crc kubenswrapper[4728]: I0227 11:50:37.240220 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-njfzf_c3efa330-29e0-41a2-aa09-86babc8aa9b4/nmstate-handler/0.log" Feb 27 11:50:37 crc kubenswrapper[4728]: I0227 11:50:37.241277 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-cz9mk_339eedb6-7c4e-4837-9cef-a76ee6398990/kube-rbac-proxy/0.log" Feb 27 11:50:37 crc kubenswrapper[4728]: I0227 11:50:37.586837 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-cz9mk_339eedb6-7c4e-4837-9cef-a76ee6398990/nmstate-metrics/0.log" Feb 27 11:50:37 crc kubenswrapper[4728]: I0227 11:50:37.641999 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-964rz_f98e3444-0dcb-40fd-8833-4d492381c226/nmstate-operator/0.log" Feb 27 11:50:37 crc kubenswrapper[4728]: I0227 11:50:37.802235 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-tw7dm_a28afdcc-7a97-430e-a333-5c1eac61d005/nmstate-webhook/0.log" Feb 27 11:50:54 crc kubenswrapper[4728]: I0227 11:50:54.543889 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5bcdfff8f4-2p27k_392252ea-72ab-456c-9462-1c85678476cb/kube-rbac-proxy/0.log" Feb 27 11:50:54 crc kubenswrapper[4728]: I0227 11:50:54.625577 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5bcdfff8f4-2p27k_392252ea-72ab-456c-9462-1c85678476cb/manager/0.log" Feb 27 11:51:12 crc kubenswrapper[4728]: I0227 11:51:12.045779 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-zzdsp_936f8a2d-37fa-4d39-9de8-a07aa8efaf6a/prometheus-operator/0.log" Feb 27 11:51:12 crc kubenswrapper[4728]: I0227 11:51:12.194184 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt_18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82/prometheus-operator-admission-webhook/0.log" Feb 27 11:51:12 crc kubenswrapper[4728]: I0227 11:51:12.405529 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-n82qn_685b65a4-9d96-4018-b8df-a45eccc1e923/operator/0.log" Feb 27 11:51:12 crc kubenswrapper[4728]: I0227 11:51:12.406439 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg_c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb/prometheus-operator-admission-webhook/0.log" Feb 27 11:51:13 crc kubenswrapper[4728]: I0227 11:51:13.499640 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-jc2j2_fec191d1-b76f-4b8c-94c2-2d217a21951c/observability-ui-dashboards/0.log" Feb 27 11:51:13 crc kubenswrapper[4728]: I0227 11:51:13.570894 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-2bd92_52f3fd68-b1f1-4b15-b15c-5356d08aeedd/perses-operator/0.log" Feb 27 11:51:30 crc kubenswrapper[4728]: I0227 11:51:30.962376 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-c8l9h_6aa8d145-6b1c-4547-8f98-de72f2f0b5e2/cluster-logging-operator/0.log" Feb 27 11:51:31 crc kubenswrapper[4728]: I0227 11:51:31.123020 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-44qdc_7cf5ad18-3d58-4d35-8255-c581c2d2b722/collector/0.log" Feb 27 11:51:31 crc kubenswrapper[4728]: I0227 11:51:31.298124 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_d201247f-3eb4-46c6-a46c-6c37f5a28219/loki-compactor/0.log" Feb 27 11:51:31 crc kubenswrapper[4728]: I0227 11:51:31.346926 4728 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-mgsnx_404c78e7-51af-4a23-8db8-5c19bf876bdc/loki-distributor/0.log" Feb 27 11:51:31 crc kubenswrapper[4728]: I0227 11:51:31.457581 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6c7d6ccd54-mdl8x_9d307382-547a-4a24-b552-9c3c2390a947/gateway/0.log" Feb 27 11:51:31 crc kubenswrapper[4728]: I0227 11:51:31.468041 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6c7d6ccd54-mdl8x_9d307382-547a-4a24-b552-9c3c2390a947/opa/0.log" Feb 27 11:51:31 crc kubenswrapper[4728]: I0227 11:51:31.613403 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6c7d6ccd54-z9tn8_a623f6cb-5632-4d1d-9754-7d146be81c79/gateway/0.log" Feb 27 11:51:31 crc kubenswrapper[4728]: I0227 11:51:31.673732 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6c7d6ccd54-z9tn8_a623f6cb-5632-4d1d-9754-7d146be81c79/opa/0.log" Feb 27 11:51:31 crc kubenswrapper[4728]: I0227 11:51:31.737777 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_bc136f96-e5d5-4201-a560-a9759fee98d5/loki-index-gateway/0.log" Feb 27 11:51:31 crc kubenswrapper[4728]: I0227 11:51:31.893113 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_30b747c6-8aaf-4862-ab83-c642456f025a/loki-ingester/0.log" Feb 27 11:51:31 crc kubenswrapper[4728]: I0227 11:51:31.952596 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-thpcv_bd0adbab-805e-4ac7-b2a2-3c67275176e0/loki-querier/0.log" Feb 27 11:51:32 crc kubenswrapper[4728]: I0227 11:51:32.054755 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-958gn_63bf6e97-3442-45f7-a57b-b811efabb073/loki-query-frontend/0.log" Feb 27 11:51:34 crc kubenswrapper[4728]: I0227 11:51:34.783043 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a7b93ac4-55f2-4491-b4b4-f8abfd837dfa" containerName="galera" probeResult="failure" output="command timed out" Feb 27 11:51:34 crc kubenswrapper[4728]: I0227 11:51:34.783051 4728 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a7b93ac4-55f2-4491-b4b4-f8abfd837dfa" containerName="galera" probeResult="failure" output="command timed out" Feb 27 11:51:35 crc kubenswrapper[4728]: I0227 11:51:35.922403 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:51:35 crc kubenswrapper[4728]: I0227 11:51:35.923548 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:51:51 crc kubenswrapper[4728]: I0227 11:51:51.695924 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-6jf48_e32ec3a5-e73e-4a97-806c-3108464a20ef/kube-rbac-proxy/0.log" Feb 27 11:51:51 crc kubenswrapper[4728]: I0227 11:51:51.881432 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-6jf48_e32ec3a5-e73e-4a97-806c-3108464a20ef/controller/0.log" Feb 27 11:51:51 crc kubenswrapper[4728]: I0227 11:51:51.947361 4728 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2mg6c_59e183d9-0890-454a-8d87-779c957c6b18/cp-frr-files/0.log" Feb 27 11:51:52 crc kubenswrapper[4728]: I0227 11:51:52.175176 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2mg6c_59e183d9-0890-454a-8d87-779c957c6b18/cp-frr-files/0.log" Feb 27 11:51:52 crc kubenswrapper[4728]: I0227 11:51:52.213421 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2mg6c_59e183d9-0890-454a-8d87-779c957c6b18/cp-reloader/0.log" Feb 27 11:51:52 crc kubenswrapper[4728]: I0227 11:51:52.267220 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2mg6c_59e183d9-0890-454a-8d87-779c957c6b18/cp-reloader/0.log" Feb 27 11:51:52 crc kubenswrapper[4728]: I0227 11:51:52.267349 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2mg6c_59e183d9-0890-454a-8d87-779c957c6b18/cp-metrics/0.log" Feb 27 11:51:52 crc kubenswrapper[4728]: I0227 11:51:52.402410 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2mg6c_59e183d9-0890-454a-8d87-779c957c6b18/cp-frr-files/0.log" Feb 27 11:51:52 crc kubenswrapper[4728]: I0227 11:51:52.446769 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2mg6c_59e183d9-0890-454a-8d87-779c957c6b18/cp-reloader/0.log" Feb 27 11:51:52 crc kubenswrapper[4728]: I0227 11:51:52.462303 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2mg6c_59e183d9-0890-454a-8d87-779c957c6b18/cp-metrics/0.log" Feb 27 11:51:52 crc kubenswrapper[4728]: I0227 11:51:52.475171 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2mg6c_59e183d9-0890-454a-8d87-779c957c6b18/cp-metrics/0.log" Feb 27 11:51:52 crc kubenswrapper[4728]: I0227 11:51:52.859712 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-2mg6c_59e183d9-0890-454a-8d87-779c957c6b18/cp-frr-files/0.log" Feb 27 11:51:52 crc kubenswrapper[4728]: I0227 11:51:52.928079 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2mg6c_59e183d9-0890-454a-8d87-779c957c6b18/controller/0.log" Feb 27 11:51:52 crc kubenswrapper[4728]: I0227 11:51:52.981906 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2mg6c_59e183d9-0890-454a-8d87-779c957c6b18/cp-metrics/0.log" Feb 27 11:51:53 crc kubenswrapper[4728]: I0227 11:51:53.010092 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2mg6c_59e183d9-0890-454a-8d87-779c957c6b18/cp-reloader/0.log" Feb 27 11:51:53 crc kubenswrapper[4728]: I0227 11:51:53.215832 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2mg6c_59e183d9-0890-454a-8d87-779c957c6b18/frr-metrics/0.log" Feb 27 11:51:53 crc kubenswrapper[4728]: I0227 11:51:53.235916 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2mg6c_59e183d9-0890-454a-8d87-779c957c6b18/kube-rbac-proxy/0.log" Feb 27 11:51:53 crc kubenswrapper[4728]: I0227 11:51:53.253464 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2mg6c_59e183d9-0890-454a-8d87-779c957c6b18/kube-rbac-proxy-frr/0.log" Feb 27 11:51:53 crc kubenswrapper[4728]: I0227 11:51:53.442859 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2mg6c_59e183d9-0890-454a-8d87-779c957c6b18/reloader/0.log" Feb 27 11:51:53 crc kubenswrapper[4728]: I0227 11:51:53.466924 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-4cwsw_5d70cb53-6bc9-4605-bd13-8a60aa7fff09/frr-k8s-webhook-server/0.log" Feb 27 11:51:53 crc kubenswrapper[4728]: I0227 11:51:53.720771 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7dcfc4d7fd-n4jxt_7a354dce-cc4d-4d02-bfe1-24cdd16ad1c1/manager/0.log" Feb 27 11:51:54 crc kubenswrapper[4728]: I0227 11:51:54.089255 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cld6r_bfdda6c7-f942-4d1b-b0f6-7f169505a8cc/kube-rbac-proxy/0.log" Feb 27 11:51:54 crc kubenswrapper[4728]: I0227 11:51:54.095260 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-676f4c84b9-j2lx5_54c10ca0-417a-46fc-9d21-ffc4e939073d/webhook-server/0.log" Feb 27 11:51:54 crc kubenswrapper[4728]: I0227 11:51:54.966664 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2mg6c_59e183d9-0890-454a-8d87-779c957c6b18/frr/0.log" Feb 27 11:51:55 crc kubenswrapper[4728]: I0227 11:51:55.022403 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cld6r_bfdda6c7-f942-4d1b-b0f6-7f169505a8cc/speaker/0.log" Feb 27 11:52:00 crc kubenswrapper[4728]: I0227 11:52:00.175131 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536552-qxxzx"] Feb 27 11:52:00 crc kubenswrapper[4728]: E0227 11:52:00.177024 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ba33a7-8d7d-49b3-9c9b-72902af18214" containerName="oc" Feb 27 11:52:00 crc kubenswrapper[4728]: I0227 11:52:00.177054 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ba33a7-8d7d-49b3-9c9b-72902af18214" containerName="oc" Feb 27 11:52:00 crc kubenswrapper[4728]: I0227 11:52:00.177729 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ba33a7-8d7d-49b3-9c9b-72902af18214" containerName="oc" Feb 27 11:52:00 crc kubenswrapper[4728]: I0227 11:52:00.179332 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536552-qxxzx" Feb 27 11:52:00 crc kubenswrapper[4728]: I0227 11:52:00.181772 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:52:00 crc kubenswrapper[4728]: I0227 11:52:00.183458 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:52:00 crc kubenswrapper[4728]: I0227 11:52:00.183982 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:52:00 crc kubenswrapper[4728]: I0227 11:52:00.188073 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536552-qxxzx"] Feb 27 11:52:00 crc kubenswrapper[4728]: I0227 11:52:00.337908 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn6gq\" (UniqueName: \"kubernetes.io/projected/ff8bde6d-9c7d-4085-9299-3d6117b9ff7f-kube-api-access-xn6gq\") pod \"auto-csr-approver-29536552-qxxzx\" (UID: \"ff8bde6d-9c7d-4085-9299-3d6117b9ff7f\") " pod="openshift-infra/auto-csr-approver-29536552-qxxzx" Feb 27 11:52:00 crc kubenswrapper[4728]: I0227 11:52:00.440572 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn6gq\" (UniqueName: \"kubernetes.io/projected/ff8bde6d-9c7d-4085-9299-3d6117b9ff7f-kube-api-access-xn6gq\") pod \"auto-csr-approver-29536552-qxxzx\" (UID: \"ff8bde6d-9c7d-4085-9299-3d6117b9ff7f\") " pod="openshift-infra/auto-csr-approver-29536552-qxxzx" Feb 27 11:52:00 crc kubenswrapper[4728]: I0227 11:52:00.472418 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn6gq\" (UniqueName: \"kubernetes.io/projected/ff8bde6d-9c7d-4085-9299-3d6117b9ff7f-kube-api-access-xn6gq\") pod \"auto-csr-approver-29536552-qxxzx\" (UID: \"ff8bde6d-9c7d-4085-9299-3d6117b9ff7f\") " 
pod="openshift-infra/auto-csr-approver-29536552-qxxzx" Feb 27 11:52:00 crc kubenswrapper[4728]: I0227 11:52:00.530579 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536552-qxxzx" Feb 27 11:52:01 crc kubenswrapper[4728]: I0227 11:52:01.150650 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536552-qxxzx"] Feb 27 11:52:01 crc kubenswrapper[4728]: I0227 11:52:01.356689 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536552-qxxzx" event={"ID":"ff8bde6d-9c7d-4085-9299-3d6117b9ff7f","Type":"ContainerStarted","Data":"c878f9ba59e3d6431021891daad40d744626f5e9a512e190afa902d2e06e3e8b"} Feb 27 11:52:03 crc kubenswrapper[4728]: I0227 11:52:03.381893 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536552-qxxzx" event={"ID":"ff8bde6d-9c7d-4085-9299-3d6117b9ff7f","Type":"ContainerStarted","Data":"34aa87de018383cd0a4a17a0bcaf0a2b855522bae1b080f3693d21a7ac8ada91"} Feb 27 11:52:03 crc kubenswrapper[4728]: I0227 11:52:03.404093 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536552-qxxzx" podStartSLOduration=2.449078638 podStartE2EDuration="3.404074397s" podCreationTimestamp="2026-02-27 11:52:00 +0000 UTC" firstStartedPulling="2026-02-27 11:52:01.136577534 +0000 UTC m=+5141.098943640" lastFinishedPulling="2026-02-27 11:52:02.091573283 +0000 UTC m=+5142.053939399" observedRunningTime="2026-02-27 11:52:03.403439099 +0000 UTC m=+5143.365805215" watchObservedRunningTime="2026-02-27 11:52:03.404074397 +0000 UTC m=+5143.366440503" Feb 27 11:52:04 crc kubenswrapper[4728]: I0227 11:52:04.399718 4728 generic.go:334] "Generic (PLEG): container finished" podID="ff8bde6d-9c7d-4085-9299-3d6117b9ff7f" containerID="34aa87de018383cd0a4a17a0bcaf0a2b855522bae1b080f3693d21a7ac8ada91" exitCode=0 Feb 27 11:52:04 crc 
kubenswrapper[4728]: I0227 11:52:04.400018 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536552-qxxzx" event={"ID":"ff8bde6d-9c7d-4085-9299-3d6117b9ff7f","Type":"ContainerDied","Data":"34aa87de018383cd0a4a17a0bcaf0a2b855522bae1b080f3693d21a7ac8ada91"} Feb 27 11:52:05 crc kubenswrapper[4728]: I0227 11:52:05.922614 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:52:05 crc kubenswrapper[4728]: I0227 11:52:05.923069 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:52:05 crc kubenswrapper[4728]: I0227 11:52:05.954241 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536552-qxxzx" Feb 27 11:52:06 crc kubenswrapper[4728]: I0227 11:52:06.100740 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn6gq\" (UniqueName: \"kubernetes.io/projected/ff8bde6d-9c7d-4085-9299-3d6117b9ff7f-kube-api-access-xn6gq\") pod \"ff8bde6d-9c7d-4085-9299-3d6117b9ff7f\" (UID: \"ff8bde6d-9c7d-4085-9299-3d6117b9ff7f\") " Feb 27 11:52:06 crc kubenswrapper[4728]: I0227 11:52:06.120745 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff8bde6d-9c7d-4085-9299-3d6117b9ff7f-kube-api-access-xn6gq" (OuterVolumeSpecName: "kube-api-access-xn6gq") pod "ff8bde6d-9c7d-4085-9299-3d6117b9ff7f" (UID: "ff8bde6d-9c7d-4085-9299-3d6117b9ff7f"). 
InnerVolumeSpecName "kube-api-access-xn6gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:52:06 crc kubenswrapper[4728]: I0227 11:52:06.204857 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn6gq\" (UniqueName: \"kubernetes.io/projected/ff8bde6d-9c7d-4085-9299-3d6117b9ff7f-kube-api-access-xn6gq\") on node \"crc\" DevicePath \"\"" Feb 27 11:52:06 crc kubenswrapper[4728]: I0227 11:52:06.420817 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536552-qxxzx" event={"ID":"ff8bde6d-9c7d-4085-9299-3d6117b9ff7f","Type":"ContainerDied","Data":"c878f9ba59e3d6431021891daad40d744626f5e9a512e190afa902d2e06e3e8b"} Feb 27 11:52:06 crc kubenswrapper[4728]: I0227 11:52:06.420853 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c878f9ba59e3d6431021891daad40d744626f5e9a512e190afa902d2e06e3e8b" Feb 27 11:52:06 crc kubenswrapper[4728]: I0227 11:52:06.420867 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536552-qxxzx" Feb 27 11:52:06 crc kubenswrapper[4728]: I0227 11:52:06.508062 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536546-82f6r"] Feb 27 11:52:06 crc kubenswrapper[4728]: I0227 11:52:06.519887 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536546-82f6r"] Feb 27 11:52:06 crc kubenswrapper[4728]: I0227 11:52:06.738156 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5" path="/var/lib/kubelet/pods/cfe76c0b-7c75-47d3-8c24-5a850e5e5ce5/volumes" Feb 27 11:52:12 crc kubenswrapper[4728]: I0227 11:52:12.359330 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77_98dca69a-84ef-4bb6-b656-c345ecc13939/util/0.log" Feb 27 11:52:12 crc kubenswrapper[4728]: I0227 11:52:12.541921 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77_98dca69a-84ef-4bb6-b656-c345ecc13939/pull/0.log" Feb 27 11:52:12 crc kubenswrapper[4728]: I0227 11:52:12.577868 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77_98dca69a-84ef-4bb6-b656-c345ecc13939/pull/0.log" Feb 27 11:52:12 crc kubenswrapper[4728]: I0227 11:52:12.637643 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77_98dca69a-84ef-4bb6-b656-c345ecc13939/util/0.log" Feb 27 11:52:12 crc kubenswrapper[4728]: I0227 11:52:12.726291 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77_98dca69a-84ef-4bb6-b656-c345ecc13939/util/0.log" Feb 27 
11:52:12 crc kubenswrapper[4728]: I0227 11:52:12.775103 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77_98dca69a-84ef-4bb6-b656-c345ecc13939/pull/0.log" Feb 27 11:52:12 crc kubenswrapper[4728]: I0227 11:52:12.836909 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82wts77_98dca69a-84ef-4bb6-b656-c345ecc13939/extract/0.log" Feb 27 11:52:12 crc kubenswrapper[4728]: I0227 11:52:12.908441 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv_9c824e4a-0c58-41aa-8bee-1bd53bb4967d/util/0.log" Feb 27 11:52:13 crc kubenswrapper[4728]: I0227 11:52:13.085369 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv_9c824e4a-0c58-41aa-8bee-1bd53bb4967d/util/0.log" Feb 27 11:52:13 crc kubenswrapper[4728]: I0227 11:52:13.109697 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv_9c824e4a-0c58-41aa-8bee-1bd53bb4967d/pull/0.log" Feb 27 11:52:13 crc kubenswrapper[4728]: I0227 11:52:13.110572 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv_9c824e4a-0c58-41aa-8bee-1bd53bb4967d/pull/0.log" Feb 27 11:52:13 crc kubenswrapper[4728]: I0227 11:52:13.285378 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv_9c824e4a-0c58-41aa-8bee-1bd53bb4967d/extract/0.log" Feb 27 11:52:13 crc kubenswrapper[4728]: I0227 11:52:13.320349 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv_9c824e4a-0c58-41aa-8bee-1bd53bb4967d/pull/0.log" Feb 27 11:52:13 crc kubenswrapper[4728]: I0227 11:52:13.324440 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19l6pwv_9c824e4a-0c58-41aa-8bee-1bd53bb4967d/util/0.log" Feb 27 11:52:13 crc kubenswrapper[4728]: I0227 11:52:13.456331 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56_aa621064-b4a3-4d02-873b-c67b5a2f311c/util/0.log" Feb 27 11:52:13 crc kubenswrapper[4728]: I0227 11:52:13.655559 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56_aa621064-b4a3-4d02-873b-c67b5a2f311c/util/0.log" Feb 27 11:52:13 crc kubenswrapper[4728]: I0227 11:52:13.689226 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56_aa621064-b4a3-4d02-873b-c67b5a2f311c/pull/0.log" Feb 27 11:52:13 crc kubenswrapper[4728]: I0227 11:52:13.757059 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56_aa621064-b4a3-4d02-873b-c67b5a2f311c/pull/0.log" Feb 27 11:52:13 crc kubenswrapper[4728]: I0227 11:52:13.917423 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56_aa621064-b4a3-4d02-873b-c67b5a2f311c/pull/0.log" Feb 27 11:52:13 crc kubenswrapper[4728]: I0227 11:52:13.934096 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56_aa621064-b4a3-4d02-873b-c67b5a2f311c/util/0.log" Feb 27 
11:52:13 crc kubenswrapper[4728]: I0227 11:52:13.946627 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gvb56_aa621064-b4a3-4d02-873b-c67b5a2f311c/extract/0.log" Feb 27 11:52:14 crc kubenswrapper[4728]: I0227 11:52:14.113726 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6xxv8_d87df44b-fc24-4c81-8c22-94a12665da84/extract-utilities/0.log" Feb 27 11:52:14 crc kubenswrapper[4728]: I0227 11:52:14.328588 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6xxv8_d87df44b-fc24-4c81-8c22-94a12665da84/extract-utilities/0.log" Feb 27 11:52:14 crc kubenswrapper[4728]: I0227 11:52:14.498309 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6xxv8_d87df44b-fc24-4c81-8c22-94a12665da84/extract-content/0.log" Feb 27 11:52:14 crc kubenswrapper[4728]: I0227 11:52:14.606449 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6xxv8_d87df44b-fc24-4c81-8c22-94a12665da84/extract-content/0.log" Feb 27 11:52:14 crc kubenswrapper[4728]: I0227 11:52:14.808953 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6xxv8_d87df44b-fc24-4c81-8c22-94a12665da84/extract-utilities/0.log" Feb 27 11:52:14 crc kubenswrapper[4728]: I0227 11:52:14.854887 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6xxv8_d87df44b-fc24-4c81-8c22-94a12665da84/extract-content/0.log" Feb 27 11:52:15 crc kubenswrapper[4728]: I0227 11:52:15.090637 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ktqsf_00b4334c-3f12-4336-bb78-6be880bfc501/extract-utilities/0.log" Feb 27 11:52:15 crc kubenswrapper[4728]: I0227 11:52:15.227410 4728 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ktqsf_00b4334c-3f12-4336-bb78-6be880bfc501/extract-content/0.log" Feb 27 11:52:15 crc kubenswrapper[4728]: I0227 11:52:15.244766 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ktqsf_00b4334c-3f12-4336-bb78-6be880bfc501/extract-utilities/0.log" Feb 27 11:52:15 crc kubenswrapper[4728]: I0227 11:52:15.281540 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6xxv8_d87df44b-fc24-4c81-8c22-94a12665da84/registry-server/0.log" Feb 27 11:52:15 crc kubenswrapper[4728]: I0227 11:52:15.304938 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ktqsf_00b4334c-3f12-4336-bb78-6be880bfc501/extract-content/0.log" Feb 27 11:52:15 crc kubenswrapper[4728]: I0227 11:52:15.477184 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ktqsf_00b4334c-3f12-4336-bb78-6be880bfc501/extract-utilities/0.log" Feb 27 11:52:15 crc kubenswrapper[4728]: I0227 11:52:15.484617 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ktqsf_00b4334c-3f12-4336-bb78-6be880bfc501/extract-content/0.log" Feb 27 11:52:15 crc kubenswrapper[4728]: I0227 11:52:15.515360 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc_a6dd47e0-f719-4fb7-99ea-33468a6bdc97/util/0.log" Feb 27 11:52:15 crc kubenswrapper[4728]: I0227 11:52:15.665946 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc_a6dd47e0-f719-4fb7-99ea-33468a6bdc97/util/0.log" Feb 27 11:52:15 crc kubenswrapper[4728]: I0227 11:52:15.737758 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc_a6dd47e0-f719-4fb7-99ea-33468a6bdc97/pull/0.log" Feb 27 11:52:15 crc kubenswrapper[4728]: I0227 11:52:15.783980 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ktqsf_00b4334c-3f12-4336-bb78-6be880bfc501/registry-server/0.log" Feb 27 11:52:15 crc kubenswrapper[4728]: I0227 11:52:15.785632 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc_a6dd47e0-f719-4fb7-99ea-33468a6bdc97/pull/0.log" Feb 27 11:52:15 crc kubenswrapper[4728]: I0227 11:52:15.933278 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc_a6dd47e0-f719-4fb7-99ea-33468a6bdc97/pull/0.log" Feb 27 11:52:15 crc kubenswrapper[4728]: I0227 11:52:15.945663 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc_a6dd47e0-f719-4fb7-99ea-33468a6bdc97/extract/0.log" Feb 27 11:52:15 crc kubenswrapper[4728]: I0227 11:52:15.978495 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292_53d1767e-571f-4e76-8eba-d92dd82716ce/util/0.log" Feb 27 11:52:15 crc kubenswrapper[4728]: I0227 11:52:15.990472 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4ffgjc_a6dd47e0-f719-4fb7-99ea-33468a6bdc97/util/0.log" Feb 27 11:52:16 crc kubenswrapper[4728]: I0227 11:52:16.204080 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292_53d1767e-571f-4e76-8eba-d92dd82716ce/util/0.log" Feb 27 11:52:16 crc 
kubenswrapper[4728]: I0227 11:52:16.207242 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292_53d1767e-571f-4e76-8eba-d92dd82716ce/pull/0.log" Feb 27 11:52:16 crc kubenswrapper[4728]: I0227 11:52:16.211981 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292_53d1767e-571f-4e76-8eba-d92dd82716ce/pull/0.log" Feb 27 11:52:16 crc kubenswrapper[4728]: I0227 11:52:16.383679 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292_53d1767e-571f-4e76-8eba-d92dd82716ce/util/0.log" Feb 27 11:52:16 crc kubenswrapper[4728]: I0227 11:52:16.406886 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292_53d1767e-571f-4e76-8eba-d92dd82716ce/extract/0.log" Feb 27 11:52:16 crc kubenswrapper[4728]: I0227 11:52:16.440206 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989pr292_53d1767e-571f-4e76-8eba-d92dd82716ce/pull/0.log" Feb 27 11:52:16 crc kubenswrapper[4728]: I0227 11:52:16.493168 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sgd7f_fe9f039c-47c3-4535-9098-1fcc175a79e6/marketplace-operator/0.log" Feb 27 11:52:16 crc kubenswrapper[4728]: I0227 11:52:16.606399 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lcx6b_4523ffb8-c347-4cce-8db7-95b428446b0e/extract-utilities/0.log" Feb 27 11:52:16 crc kubenswrapper[4728]: I0227 11:52:16.795973 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-lcx6b_4523ffb8-c347-4cce-8db7-95b428446b0e/extract-content/0.log" Feb 27 11:52:16 crc kubenswrapper[4728]: I0227 11:52:16.821930 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lcx6b_4523ffb8-c347-4cce-8db7-95b428446b0e/extract-content/0.log" Feb 27 11:52:16 crc kubenswrapper[4728]: I0227 11:52:16.826676 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lcx6b_4523ffb8-c347-4cce-8db7-95b428446b0e/extract-utilities/0.log" Feb 27 11:52:16 crc kubenswrapper[4728]: I0227 11:52:16.987255 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lcx6b_4523ffb8-c347-4cce-8db7-95b428446b0e/extract-content/0.log" Feb 27 11:52:17 crc kubenswrapper[4728]: I0227 11:52:17.044482 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lcx6b_4523ffb8-c347-4cce-8db7-95b428446b0e/extract-utilities/0.log" Feb 27 11:52:17 crc kubenswrapper[4728]: I0227 11:52:17.048327 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pkrw7_06035bc4-873a-492a-baa0-0760ddb0da68/extract-utilities/0.log" Feb 27 11:52:17 crc kubenswrapper[4728]: I0227 11:52:17.141079 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lcx6b_4523ffb8-c347-4cce-8db7-95b428446b0e/registry-server/0.log" Feb 27 11:52:17 crc kubenswrapper[4728]: I0227 11:52:17.234355 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pkrw7_06035bc4-873a-492a-baa0-0760ddb0da68/extract-utilities/0.log" Feb 27 11:52:17 crc kubenswrapper[4728]: I0227 11:52:17.240928 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-pkrw7_06035bc4-873a-492a-baa0-0760ddb0da68/extract-content/0.log" Feb 27 11:52:17 crc kubenswrapper[4728]: I0227 11:52:17.270838 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pkrw7_06035bc4-873a-492a-baa0-0760ddb0da68/extract-content/0.log" Feb 27 11:52:17 crc kubenswrapper[4728]: I0227 11:52:17.456030 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pkrw7_06035bc4-873a-492a-baa0-0760ddb0da68/extract-content/0.log" Feb 27 11:52:17 crc kubenswrapper[4728]: I0227 11:52:17.474693 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pkrw7_06035bc4-873a-492a-baa0-0760ddb0da68/extract-utilities/0.log" Feb 27 11:52:18 crc kubenswrapper[4728]: I0227 11:52:18.076187 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pkrw7_06035bc4-873a-492a-baa0-0760ddb0da68/registry-server/0.log" Feb 27 11:52:33 crc kubenswrapper[4728]: I0227 11:52:33.234002 4728 scope.go:117] "RemoveContainer" containerID="caac517b54a5550a452a5a695286e1b7f404fb97b62ee422dc2d6e0441c4b944" Feb 27 11:52:35 crc kubenswrapper[4728]: I0227 11:52:35.626674 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-zzdsp_936f8a2d-37fa-4d39-9de8-a07aa8efaf6a/prometheus-operator/0.log" Feb 27 11:52:35 crc kubenswrapper[4728]: I0227 11:52:35.659652 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bf87b847d-g67pg_c4fb8c6f-07d9-4e5c-8f99-80776c0e62bb/prometheus-operator-admission-webhook/0.log" Feb 27 11:52:35 crc kubenswrapper[4728]: I0227 11:52:35.730004 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bf87b847d-6b5tt_18b42bdb-6a22-4f0f-88bd-2bff0d1a4b82/prometheus-operator-admission-webhook/0.log" Feb 27 11:52:35 crc kubenswrapper[4728]: I0227 11:52:35.869279 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-jc2j2_fec191d1-b76f-4b8c-94c2-2d217a21951c/observability-ui-dashboards/0.log" Feb 27 11:52:35 crc kubenswrapper[4728]: I0227 11:52:35.881003 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-n82qn_685b65a4-9d96-4018-b8df-a45eccc1e923/operator/0.log" Feb 27 11:52:35 crc kubenswrapper[4728]: I0227 11:52:35.921983 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:52:35 crc kubenswrapper[4728]: I0227 11:52:35.922096 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:52:35 crc kubenswrapper[4728]: I0227 11:52:35.922187 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 11:52:35 crc kubenswrapper[4728]: I0227 11:52:35.924537 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e81ecfd96a35ea0b894753598830f159eb681a2f01ac3ed1749128d3d4f90dbd"} pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 11:52:35 crc kubenswrapper[4728]: I0227 11:52:35.925086 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" containerID="cri-o://e81ecfd96a35ea0b894753598830f159eb681a2f01ac3ed1749128d3d4f90dbd" gracePeriod=600 Feb 27 11:52:35 crc kubenswrapper[4728]: I0227 11:52:35.967126 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-2bd92_52f3fd68-b1f1-4b15-b15c-5356d08aeedd/perses-operator/0.log" Feb 27 11:52:36 crc kubenswrapper[4728]: I0227 11:52:36.747468 4728 generic.go:334] "Generic (PLEG): container finished" podID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerID="e81ecfd96a35ea0b894753598830f159eb681a2f01ac3ed1749128d3d4f90dbd" exitCode=0 Feb 27 11:52:36 crc kubenswrapper[4728]: I0227 11:52:36.747533 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerDied","Data":"e81ecfd96a35ea0b894753598830f159eb681a2f01ac3ed1749128d3d4f90dbd"} Feb 27 11:52:36 crc kubenswrapper[4728]: I0227 11:52:36.748226 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be"} Feb 27 11:52:36 crc kubenswrapper[4728]: I0227 11:52:36.748250 4728 scope.go:117] "RemoveContainer" containerID="5040d34bd66f04310bee95071bbc5788775a1a41119e84ef938b20c74c967ae7" Feb 27 11:52:52 crc kubenswrapper[4728]: I0227 11:52:52.383681 4728 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5bcdfff8f4-2p27k_392252ea-72ab-456c-9462-1c85678476cb/kube-rbac-proxy/0.log" Feb 27 11:52:52 crc kubenswrapper[4728]: I0227 11:52:52.410431 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5bcdfff8f4-2p27k_392252ea-72ab-456c-9462-1c85678476cb/manager/0.log" Feb 27 11:54:00 crc kubenswrapper[4728]: I0227 11:54:00.180750 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536554-t2f52"] Feb 27 11:54:00 crc kubenswrapper[4728]: E0227 11:54:00.181971 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff8bde6d-9c7d-4085-9299-3d6117b9ff7f" containerName="oc" Feb 27 11:54:00 crc kubenswrapper[4728]: I0227 11:54:00.181990 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff8bde6d-9c7d-4085-9299-3d6117b9ff7f" containerName="oc" Feb 27 11:54:00 crc kubenswrapper[4728]: I0227 11:54:00.182316 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff8bde6d-9c7d-4085-9299-3d6117b9ff7f" containerName="oc" Feb 27 11:54:00 crc kubenswrapper[4728]: I0227 11:54:00.183427 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536554-t2f52" Feb 27 11:54:00 crc kubenswrapper[4728]: I0227 11:54:00.186186 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:54:00 crc kubenswrapper[4728]: I0227 11:54:00.187115 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:54:00 crc kubenswrapper[4728]: I0227 11:54:00.198105 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:54:00 crc kubenswrapper[4728]: I0227 11:54:00.206152 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536554-t2f52"] Feb 27 11:54:00 crc kubenswrapper[4728]: I0227 11:54:00.234143 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qx2g\" (UniqueName: \"kubernetes.io/projected/c50863b8-d265-469c-abef-c687b38cd733-kube-api-access-9qx2g\") pod \"auto-csr-approver-29536554-t2f52\" (UID: \"c50863b8-d265-469c-abef-c687b38cd733\") " pod="openshift-infra/auto-csr-approver-29536554-t2f52" Feb 27 11:54:00 crc kubenswrapper[4728]: I0227 11:54:00.337752 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qx2g\" (UniqueName: \"kubernetes.io/projected/c50863b8-d265-469c-abef-c687b38cd733-kube-api-access-9qx2g\") pod \"auto-csr-approver-29536554-t2f52\" (UID: \"c50863b8-d265-469c-abef-c687b38cd733\") " pod="openshift-infra/auto-csr-approver-29536554-t2f52" Feb 27 11:54:00 crc kubenswrapper[4728]: I0227 11:54:00.375541 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qx2g\" (UniqueName: \"kubernetes.io/projected/c50863b8-d265-469c-abef-c687b38cd733-kube-api-access-9qx2g\") pod \"auto-csr-approver-29536554-t2f52\" (UID: \"c50863b8-d265-469c-abef-c687b38cd733\") " 
pod="openshift-infra/auto-csr-approver-29536554-t2f52" Feb 27 11:54:00 crc kubenswrapper[4728]: I0227 11:54:00.522863 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536554-t2f52" Feb 27 11:54:01 crc kubenswrapper[4728]: I0227 11:54:01.279666 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536554-t2f52"] Feb 27 11:54:01 crc kubenswrapper[4728]: I0227 11:54:01.884233 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536554-t2f52" event={"ID":"c50863b8-d265-469c-abef-c687b38cd733","Type":"ContainerStarted","Data":"6a1d5f4c8ba03f427aa2cfe2d21294489d9feb060ff93287a9dbaa7d632fb315"} Feb 27 11:54:03 crc kubenswrapper[4728]: I0227 11:54:03.922829 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536554-t2f52" event={"ID":"c50863b8-d265-469c-abef-c687b38cd733","Type":"ContainerStarted","Data":"73fe2eb2fec8b51b0f1be06be96e5eab21330f150d1efcb60122861cd4f119bc"} Feb 27 11:54:03 crc kubenswrapper[4728]: I0227 11:54:03.970278 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536554-t2f52" podStartSLOduration=2.742028164 podStartE2EDuration="3.970252528s" podCreationTimestamp="2026-02-27 11:54:00 +0000 UTC" firstStartedPulling="2026-02-27 11:54:01.290779779 +0000 UTC m=+5261.253145915" lastFinishedPulling="2026-02-27 11:54:02.519004143 +0000 UTC m=+5262.481370279" observedRunningTime="2026-02-27 11:54:03.942582845 +0000 UTC m=+5263.904948971" watchObservedRunningTime="2026-02-27 11:54:03.970252528 +0000 UTC m=+5263.932618634" Feb 27 11:54:04 crc kubenswrapper[4728]: I0227 11:54:04.941822 4728 generic.go:334] "Generic (PLEG): container finished" podID="c50863b8-d265-469c-abef-c687b38cd733" containerID="73fe2eb2fec8b51b0f1be06be96e5eab21330f150d1efcb60122861cd4f119bc" exitCode=0 Feb 27 11:54:04 crc 
kubenswrapper[4728]: I0227 11:54:04.941905 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536554-t2f52" event={"ID":"c50863b8-d265-469c-abef-c687b38cd733","Type":"ContainerDied","Data":"73fe2eb2fec8b51b0f1be06be96e5eab21330f150d1efcb60122861cd4f119bc"} Feb 27 11:54:06 crc kubenswrapper[4728]: I0227 11:54:06.534729 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536554-t2f52" Feb 27 11:54:06 crc kubenswrapper[4728]: I0227 11:54:06.638002 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qx2g\" (UniqueName: \"kubernetes.io/projected/c50863b8-d265-469c-abef-c687b38cd733-kube-api-access-9qx2g\") pod \"c50863b8-d265-469c-abef-c687b38cd733\" (UID: \"c50863b8-d265-469c-abef-c687b38cd733\") " Feb 27 11:54:06 crc kubenswrapper[4728]: I0227 11:54:06.647935 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c50863b8-d265-469c-abef-c687b38cd733-kube-api-access-9qx2g" (OuterVolumeSpecName: "kube-api-access-9qx2g") pod "c50863b8-d265-469c-abef-c687b38cd733" (UID: "c50863b8-d265-469c-abef-c687b38cd733"). InnerVolumeSpecName "kube-api-access-9qx2g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:54:06 crc kubenswrapper[4728]: I0227 11:54:06.742083 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qx2g\" (UniqueName: \"kubernetes.io/projected/c50863b8-d265-469c-abef-c687b38cd733-kube-api-access-9qx2g\") on node \"crc\" DevicePath \"\"" Feb 27 11:54:06 crc kubenswrapper[4728]: I0227 11:54:06.986741 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536554-t2f52" event={"ID":"c50863b8-d265-469c-abef-c687b38cd733","Type":"ContainerDied","Data":"6a1d5f4c8ba03f427aa2cfe2d21294489d9feb060ff93287a9dbaa7d632fb315"} Feb 27 11:54:06 crc kubenswrapper[4728]: I0227 11:54:06.987125 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a1d5f4c8ba03f427aa2cfe2d21294489d9feb060ff93287a9dbaa7d632fb315" Feb 27 11:54:06 crc kubenswrapper[4728]: I0227 11:54:06.986825 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536554-t2f52" Feb 27 11:54:07 crc kubenswrapper[4728]: I0227 11:54:07.029565 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536548-kqwg9"] Feb 27 11:54:07 crc kubenswrapper[4728]: I0227 11:54:07.041890 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536548-kqwg9"] Feb 27 11:54:08 crc kubenswrapper[4728]: I0227 11:54:08.743650 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="135f8f07-1e79-4050-9bb7-f4b3c4fb51c3" path="/var/lib/kubelet/pods/135f8f07-1e79-4050-9bb7-f4b3c4fb51c3/volumes" Feb 27 11:54:33 crc kubenswrapper[4728]: I0227 11:54:33.397747 4728 scope.go:117] "RemoveContainer" containerID="f24e931150cc0eae672c40e00cd891420df9e31249cb592257092babbb84ab5e" Feb 27 11:54:43 crc kubenswrapper[4728]: I0227 11:54:43.584968 4728 generic.go:334] "Generic (PLEG): container finished" 
podID="e7a468d6-13f2-49c8-8ccc-faa520a96917" containerID="d32c491f61d119e9dbf665eb35caf85e80384e37023906e00ca03ff514acc134" exitCode=0 Feb 27 11:54:43 crc kubenswrapper[4728]: I0227 11:54:43.585088 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kgv9v/must-gather-4ndc4" event={"ID":"e7a468d6-13f2-49c8-8ccc-faa520a96917","Type":"ContainerDied","Data":"d32c491f61d119e9dbf665eb35caf85e80384e37023906e00ca03ff514acc134"} Feb 27 11:54:43 crc kubenswrapper[4728]: I0227 11:54:43.586538 4728 scope.go:117] "RemoveContainer" containerID="d32c491f61d119e9dbf665eb35caf85e80384e37023906e00ca03ff514acc134" Feb 27 11:54:44 crc kubenswrapper[4728]: I0227 11:54:44.078460 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kgv9v_must-gather-4ndc4_e7a468d6-13f2-49c8-8ccc-faa520a96917/gather/0.log" Feb 27 11:54:51 crc kubenswrapper[4728]: I0227 11:54:51.712659 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kgv9v/must-gather-4ndc4"] Feb 27 11:54:51 crc kubenswrapper[4728]: I0227 11:54:51.714136 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-kgv9v/must-gather-4ndc4" podUID="e7a468d6-13f2-49c8-8ccc-faa520a96917" containerName="copy" containerID="cri-o://05da44d77e513ea74d50e12042946a53ba90d26b7c2dbbc217ce666a4a0de20d" gracePeriod=2 Feb 27 11:54:51 crc kubenswrapper[4728]: I0227 11:54:51.734861 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kgv9v/must-gather-4ndc4"] Feb 27 11:54:52 crc kubenswrapper[4728]: I0227 11:54:52.267622 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kgv9v_must-gather-4ndc4_e7a468d6-13f2-49c8-8ccc-faa520a96917/copy/0.log" Feb 27 11:54:52 crc kubenswrapper[4728]: I0227 11:54:52.268318 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kgv9v/must-gather-4ndc4" Feb 27 11:54:52 crc kubenswrapper[4728]: I0227 11:54:52.393272 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms5n5\" (UniqueName: \"kubernetes.io/projected/e7a468d6-13f2-49c8-8ccc-faa520a96917-kube-api-access-ms5n5\") pod \"e7a468d6-13f2-49c8-8ccc-faa520a96917\" (UID: \"e7a468d6-13f2-49c8-8ccc-faa520a96917\") " Feb 27 11:54:52 crc kubenswrapper[4728]: I0227 11:54:52.393761 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e7a468d6-13f2-49c8-8ccc-faa520a96917-must-gather-output\") pod \"e7a468d6-13f2-49c8-8ccc-faa520a96917\" (UID: \"e7a468d6-13f2-49c8-8ccc-faa520a96917\") " Feb 27 11:54:52 crc kubenswrapper[4728]: I0227 11:54:52.398838 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a468d6-13f2-49c8-8ccc-faa520a96917-kube-api-access-ms5n5" (OuterVolumeSpecName: "kube-api-access-ms5n5") pod "e7a468d6-13f2-49c8-8ccc-faa520a96917" (UID: "e7a468d6-13f2-49c8-8ccc-faa520a96917"). InnerVolumeSpecName "kube-api-access-ms5n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:54:52 crc kubenswrapper[4728]: I0227 11:54:52.497709 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms5n5\" (UniqueName: \"kubernetes.io/projected/e7a468d6-13f2-49c8-8ccc-faa520a96917-kube-api-access-ms5n5\") on node \"crc\" DevicePath \"\"" Feb 27 11:54:52 crc kubenswrapper[4728]: I0227 11:54:52.588275 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7a468d6-13f2-49c8-8ccc-faa520a96917-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e7a468d6-13f2-49c8-8ccc-faa520a96917" (UID: "e7a468d6-13f2-49c8-8ccc-faa520a96917"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:54:52 crc kubenswrapper[4728]: I0227 11:54:52.600603 4728 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e7a468d6-13f2-49c8-8ccc-faa520a96917-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 27 11:54:52 crc kubenswrapper[4728]: I0227 11:54:52.709836 4728 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kgv9v_must-gather-4ndc4_e7a468d6-13f2-49c8-8ccc-faa520a96917/copy/0.log" Feb 27 11:54:52 crc kubenswrapper[4728]: I0227 11:54:52.710189 4728 generic.go:334] "Generic (PLEG): container finished" podID="e7a468d6-13f2-49c8-8ccc-faa520a96917" containerID="05da44d77e513ea74d50e12042946a53ba90d26b7c2dbbc217ce666a4a0de20d" exitCode=143 Feb 27 11:54:52 crc kubenswrapper[4728]: I0227 11:54:52.710259 4728 scope.go:117] "RemoveContainer" containerID="05da44d77e513ea74d50e12042946a53ba90d26b7c2dbbc217ce666a4a0de20d" Feb 27 11:54:52 crc kubenswrapper[4728]: I0227 11:54:52.710442 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kgv9v/must-gather-4ndc4" Feb 27 11:54:52 crc kubenswrapper[4728]: I0227 11:54:52.759584 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7a468d6-13f2-49c8-8ccc-faa520a96917" path="/var/lib/kubelet/pods/e7a468d6-13f2-49c8-8ccc-faa520a96917/volumes" Feb 27 11:54:52 crc kubenswrapper[4728]: I0227 11:54:52.781233 4728 scope.go:117] "RemoveContainer" containerID="d32c491f61d119e9dbf665eb35caf85e80384e37023906e00ca03ff514acc134" Feb 27 11:54:52 crc kubenswrapper[4728]: I0227 11:54:52.840321 4728 scope.go:117] "RemoveContainer" containerID="05da44d77e513ea74d50e12042946a53ba90d26b7c2dbbc217ce666a4a0de20d" Feb 27 11:54:52 crc kubenswrapper[4728]: E0227 11:54:52.840792 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05da44d77e513ea74d50e12042946a53ba90d26b7c2dbbc217ce666a4a0de20d\": container with ID starting with 05da44d77e513ea74d50e12042946a53ba90d26b7c2dbbc217ce666a4a0de20d not found: ID does not exist" containerID="05da44d77e513ea74d50e12042946a53ba90d26b7c2dbbc217ce666a4a0de20d" Feb 27 11:54:52 crc kubenswrapper[4728]: I0227 11:54:52.840856 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05da44d77e513ea74d50e12042946a53ba90d26b7c2dbbc217ce666a4a0de20d"} err="failed to get container status \"05da44d77e513ea74d50e12042946a53ba90d26b7c2dbbc217ce666a4a0de20d\": rpc error: code = NotFound desc = could not find container \"05da44d77e513ea74d50e12042946a53ba90d26b7c2dbbc217ce666a4a0de20d\": container with ID starting with 05da44d77e513ea74d50e12042946a53ba90d26b7c2dbbc217ce666a4a0de20d not found: ID does not exist" Feb 27 11:54:52 crc kubenswrapper[4728]: I0227 11:54:52.840904 4728 scope.go:117] "RemoveContainer" containerID="d32c491f61d119e9dbf665eb35caf85e80384e37023906e00ca03ff514acc134" Feb 27 11:54:52 crc kubenswrapper[4728]: E0227 11:54:52.841319 4728 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d32c491f61d119e9dbf665eb35caf85e80384e37023906e00ca03ff514acc134\": container with ID starting with d32c491f61d119e9dbf665eb35caf85e80384e37023906e00ca03ff514acc134 not found: ID does not exist" containerID="d32c491f61d119e9dbf665eb35caf85e80384e37023906e00ca03ff514acc134" Feb 27 11:54:52 crc kubenswrapper[4728]: I0227 11:54:52.841351 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d32c491f61d119e9dbf665eb35caf85e80384e37023906e00ca03ff514acc134"} err="failed to get container status \"d32c491f61d119e9dbf665eb35caf85e80384e37023906e00ca03ff514acc134\": rpc error: code = NotFound desc = could not find container \"d32c491f61d119e9dbf665eb35caf85e80384e37023906e00ca03ff514acc134\": container with ID starting with d32c491f61d119e9dbf665eb35caf85e80384e37023906e00ca03ff514acc134 not found: ID does not exist" Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.358038 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6r49w"] Feb 27 11:54:53 crc kubenswrapper[4728]: E0227 11:54:53.359221 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a468d6-13f2-49c8-8ccc-faa520a96917" containerName="copy" Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.359253 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a468d6-13f2-49c8-8ccc-faa520a96917" containerName="copy" Feb 27 11:54:53 crc kubenswrapper[4728]: E0227 11:54:53.359311 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a468d6-13f2-49c8-8ccc-faa520a96917" containerName="gather" Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.359324 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a468d6-13f2-49c8-8ccc-faa520a96917" containerName="gather" Feb 27 11:54:53 crc kubenswrapper[4728]: E0227 11:54:53.359352 4728 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c50863b8-d265-469c-abef-c687b38cd733" containerName="oc" Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.359364 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="c50863b8-d265-469c-abef-c687b38cd733" containerName="oc" Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.359829 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a468d6-13f2-49c8-8ccc-faa520a96917" containerName="copy" Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.359891 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="c50863b8-d265-469c-abef-c687b38cd733" containerName="oc" Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.359930 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a468d6-13f2-49c8-8ccc-faa520a96917" containerName="gather" Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.367101 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6r49w" Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.383906 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6r49w"] Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.421963 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c115aed-312c-4b5d-8b75-bcc68320a21e-catalog-content\") pod \"redhat-operators-6r49w\" (UID: \"1c115aed-312c-4b5d-8b75-bcc68320a21e\") " pod="openshift-marketplace/redhat-operators-6r49w" Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.422651 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c115aed-312c-4b5d-8b75-bcc68320a21e-utilities\") pod \"redhat-operators-6r49w\" (UID: \"1c115aed-312c-4b5d-8b75-bcc68320a21e\") " 
pod="openshift-marketplace/redhat-operators-6r49w" Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.422726 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67j2b\" (UniqueName: \"kubernetes.io/projected/1c115aed-312c-4b5d-8b75-bcc68320a21e-kube-api-access-67j2b\") pod \"redhat-operators-6r49w\" (UID: \"1c115aed-312c-4b5d-8b75-bcc68320a21e\") " pod="openshift-marketplace/redhat-operators-6r49w" Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.524865 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c115aed-312c-4b5d-8b75-bcc68320a21e-catalog-content\") pod \"redhat-operators-6r49w\" (UID: \"1c115aed-312c-4b5d-8b75-bcc68320a21e\") " pod="openshift-marketplace/redhat-operators-6r49w" Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.525146 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c115aed-312c-4b5d-8b75-bcc68320a21e-utilities\") pod \"redhat-operators-6r49w\" (UID: \"1c115aed-312c-4b5d-8b75-bcc68320a21e\") " pod="openshift-marketplace/redhat-operators-6r49w" Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.525185 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67j2b\" (UniqueName: \"kubernetes.io/projected/1c115aed-312c-4b5d-8b75-bcc68320a21e-kube-api-access-67j2b\") pod \"redhat-operators-6r49w\" (UID: \"1c115aed-312c-4b5d-8b75-bcc68320a21e\") " pod="openshift-marketplace/redhat-operators-6r49w" Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.525371 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c115aed-312c-4b5d-8b75-bcc68320a21e-catalog-content\") pod \"redhat-operators-6r49w\" (UID: \"1c115aed-312c-4b5d-8b75-bcc68320a21e\") " 
pod="openshift-marketplace/redhat-operators-6r49w" Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.525719 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c115aed-312c-4b5d-8b75-bcc68320a21e-utilities\") pod \"redhat-operators-6r49w\" (UID: \"1c115aed-312c-4b5d-8b75-bcc68320a21e\") " pod="openshift-marketplace/redhat-operators-6r49w" Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.545004 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67j2b\" (UniqueName: \"kubernetes.io/projected/1c115aed-312c-4b5d-8b75-bcc68320a21e-kube-api-access-67j2b\") pod \"redhat-operators-6r49w\" (UID: \"1c115aed-312c-4b5d-8b75-bcc68320a21e\") " pod="openshift-marketplace/redhat-operators-6r49w" Feb 27 11:54:53 crc kubenswrapper[4728]: I0227 11:54:53.695430 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6r49w" Feb 27 11:54:54 crc kubenswrapper[4728]: I0227 11:54:54.254437 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6r49w"] Feb 27 11:54:54 crc kubenswrapper[4728]: I0227 11:54:54.751707 4728 generic.go:334] "Generic (PLEG): container finished" podID="1c115aed-312c-4b5d-8b75-bcc68320a21e" containerID="cefbe0f0bf0963e521c3683a187043a7ed0984ad937044a02ad6738590991b30" exitCode=0 Feb 27 11:54:54 crc kubenswrapper[4728]: I0227 11:54:54.751789 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6r49w" event={"ID":"1c115aed-312c-4b5d-8b75-bcc68320a21e","Type":"ContainerDied","Data":"cefbe0f0bf0963e521c3683a187043a7ed0984ad937044a02ad6738590991b30"} Feb 27 11:54:54 crc kubenswrapper[4728]: I0227 11:54:54.752059 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6r49w" 
event={"ID":"1c115aed-312c-4b5d-8b75-bcc68320a21e","Type":"ContainerStarted","Data":"99da7ef53db329fabbeb0a72deeae123684adcc6ddd676140a7925ac6aa10856"} Feb 27 11:54:55 crc kubenswrapper[4728]: I0227 11:54:55.762842 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6r49w" event={"ID":"1c115aed-312c-4b5d-8b75-bcc68320a21e","Type":"ContainerStarted","Data":"656ecff4ced4888623ad7ac4fb14a24f6007de5963e16c20fbc7c4b82cc56126"} Feb 27 11:55:01 crc kubenswrapper[4728]: I0227 11:55:01.856799 4728 generic.go:334] "Generic (PLEG): container finished" podID="1c115aed-312c-4b5d-8b75-bcc68320a21e" containerID="656ecff4ced4888623ad7ac4fb14a24f6007de5963e16c20fbc7c4b82cc56126" exitCode=0 Feb 27 11:55:01 crc kubenswrapper[4728]: I0227 11:55:01.856840 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6r49w" event={"ID":"1c115aed-312c-4b5d-8b75-bcc68320a21e","Type":"ContainerDied","Data":"656ecff4ced4888623ad7ac4fb14a24f6007de5963e16c20fbc7c4b82cc56126"} Feb 27 11:55:01 crc kubenswrapper[4728]: I0227 11:55:01.861491 4728 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 11:55:02 crc kubenswrapper[4728]: I0227 11:55:02.870194 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6r49w" event={"ID":"1c115aed-312c-4b5d-8b75-bcc68320a21e","Type":"ContainerStarted","Data":"bf697459bcf31e8a171233f0dbdf1dd89dd78a4a5cf05101e9038cc7f858da59"} Feb 27 11:55:02 crc kubenswrapper[4728]: I0227 11:55:02.892697 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6r49w" podStartSLOduration=2.258038954 podStartE2EDuration="9.892678083s" podCreationTimestamp="2026-02-27 11:54:53 +0000 UTC" firstStartedPulling="2026-02-27 11:54:54.75475915 +0000 UTC m=+5314.717125296" lastFinishedPulling="2026-02-27 11:55:02.389398269 +0000 UTC m=+5322.351764425" 
observedRunningTime="2026-02-27 11:55:02.891032968 +0000 UTC m=+5322.853399074" watchObservedRunningTime="2026-02-27 11:55:02.892678083 +0000 UTC m=+5322.855044199" Feb 27 11:55:03 crc kubenswrapper[4728]: I0227 11:55:03.696995 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6r49w" Feb 27 11:55:03 crc kubenswrapper[4728]: I0227 11:55:03.697315 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6r49w" Feb 27 11:55:04 crc kubenswrapper[4728]: I0227 11:55:04.746555 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6r49w" podUID="1c115aed-312c-4b5d-8b75-bcc68320a21e" containerName="registry-server" probeResult="failure" output=< Feb 27 11:55:04 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:55:04 crc kubenswrapper[4728]: > Feb 27 11:55:05 crc kubenswrapper[4728]: I0227 11:55:05.922349 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:55:05 crc kubenswrapper[4728]: I0227 11:55:05.922668 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:55:15 crc kubenswrapper[4728]: I0227 11:55:15.133023 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6r49w" podUID="1c115aed-312c-4b5d-8b75-bcc68320a21e" containerName="registry-server" probeResult="failure" output=< Feb 27 11:55:15 crc 
kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:55:15 crc kubenswrapper[4728]: > Feb 27 11:55:24 crc kubenswrapper[4728]: I0227 11:55:24.757060 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6r49w" podUID="1c115aed-312c-4b5d-8b75-bcc68320a21e" containerName="registry-server" probeResult="failure" output=< Feb 27 11:55:24 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:55:24 crc kubenswrapper[4728]: > Feb 27 11:55:34 crc kubenswrapper[4728]: I0227 11:55:34.767828 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6r49w" podUID="1c115aed-312c-4b5d-8b75-bcc68320a21e" containerName="registry-server" probeResult="failure" output=< Feb 27 11:55:34 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:55:34 crc kubenswrapper[4728]: > Feb 27 11:55:35 crc kubenswrapper[4728]: I0227 11:55:35.922877 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:55:35 crc kubenswrapper[4728]: I0227 11:55:35.922987 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:55:43 crc kubenswrapper[4728]: I0227 11:55:43.766470 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6r49w" Feb 27 11:55:43 crc kubenswrapper[4728]: I0227 11:55:43.853222 4728 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6r49w" Feb 27 11:55:44 crc kubenswrapper[4728]: I0227 11:55:44.015354 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6r49w"] Feb 27 11:55:45 crc kubenswrapper[4728]: I0227 11:55:45.371632 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6r49w" podUID="1c115aed-312c-4b5d-8b75-bcc68320a21e" containerName="registry-server" containerID="cri-o://bf697459bcf31e8a171233f0dbdf1dd89dd78a4a5cf05101e9038cc7f858da59" gracePeriod=2 Feb 27 11:55:46 crc kubenswrapper[4728]: I0227 11:55:46.386118 4728 generic.go:334] "Generic (PLEG): container finished" podID="1c115aed-312c-4b5d-8b75-bcc68320a21e" containerID="bf697459bcf31e8a171233f0dbdf1dd89dd78a4a5cf05101e9038cc7f858da59" exitCode=0 Feb 27 11:55:46 crc kubenswrapper[4728]: I0227 11:55:46.386217 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6r49w" event={"ID":"1c115aed-312c-4b5d-8b75-bcc68320a21e","Type":"ContainerDied","Data":"bf697459bcf31e8a171233f0dbdf1dd89dd78a4a5cf05101e9038cc7f858da59"} Feb 27 11:55:46 crc kubenswrapper[4728]: I0227 11:55:46.387650 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6r49w" event={"ID":"1c115aed-312c-4b5d-8b75-bcc68320a21e","Type":"ContainerDied","Data":"99da7ef53db329fabbeb0a72deeae123684adcc6ddd676140a7925ac6aa10856"} Feb 27 11:55:46 crc kubenswrapper[4728]: I0227 11:55:46.387669 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99da7ef53db329fabbeb0a72deeae123684adcc6ddd676140a7925ac6aa10856" Feb 27 11:55:46 crc kubenswrapper[4728]: I0227 11:55:46.420865 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6r49w" Feb 27 11:55:46 crc kubenswrapper[4728]: I0227 11:55:46.436300 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c115aed-312c-4b5d-8b75-bcc68320a21e-catalog-content\") pod \"1c115aed-312c-4b5d-8b75-bcc68320a21e\" (UID: \"1c115aed-312c-4b5d-8b75-bcc68320a21e\") " Feb 27 11:55:46 crc kubenswrapper[4728]: I0227 11:55:46.436363 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67j2b\" (UniqueName: \"kubernetes.io/projected/1c115aed-312c-4b5d-8b75-bcc68320a21e-kube-api-access-67j2b\") pod \"1c115aed-312c-4b5d-8b75-bcc68320a21e\" (UID: \"1c115aed-312c-4b5d-8b75-bcc68320a21e\") " Feb 27 11:55:46 crc kubenswrapper[4728]: I0227 11:55:46.436457 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c115aed-312c-4b5d-8b75-bcc68320a21e-utilities\") pod \"1c115aed-312c-4b5d-8b75-bcc68320a21e\" (UID: \"1c115aed-312c-4b5d-8b75-bcc68320a21e\") " Feb 27 11:55:46 crc kubenswrapper[4728]: I0227 11:55:46.437669 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c115aed-312c-4b5d-8b75-bcc68320a21e-utilities" (OuterVolumeSpecName: "utilities") pod "1c115aed-312c-4b5d-8b75-bcc68320a21e" (UID: "1c115aed-312c-4b5d-8b75-bcc68320a21e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:55:46 crc kubenswrapper[4728]: I0227 11:55:46.463847 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c115aed-312c-4b5d-8b75-bcc68320a21e-kube-api-access-67j2b" (OuterVolumeSpecName: "kube-api-access-67j2b") pod "1c115aed-312c-4b5d-8b75-bcc68320a21e" (UID: "1c115aed-312c-4b5d-8b75-bcc68320a21e"). InnerVolumeSpecName "kube-api-access-67j2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:55:46 crc kubenswrapper[4728]: I0227 11:55:46.539111 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67j2b\" (UniqueName: \"kubernetes.io/projected/1c115aed-312c-4b5d-8b75-bcc68320a21e-kube-api-access-67j2b\") on node \"crc\" DevicePath \"\"" Feb 27 11:55:46 crc kubenswrapper[4728]: I0227 11:55:46.539150 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c115aed-312c-4b5d-8b75-bcc68320a21e-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:55:46 crc kubenswrapper[4728]: I0227 11:55:46.617097 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c115aed-312c-4b5d-8b75-bcc68320a21e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c115aed-312c-4b5d-8b75-bcc68320a21e" (UID: "1c115aed-312c-4b5d-8b75-bcc68320a21e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:55:46 crc kubenswrapper[4728]: I0227 11:55:46.641239 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c115aed-312c-4b5d-8b75-bcc68320a21e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:55:47 crc kubenswrapper[4728]: I0227 11:55:47.399415 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6r49w" Feb 27 11:55:47 crc kubenswrapper[4728]: I0227 11:55:47.441631 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6r49w"] Feb 27 11:55:47 crc kubenswrapper[4728]: I0227 11:55:47.456735 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6r49w"] Feb 27 11:55:48 crc kubenswrapper[4728]: I0227 11:55:48.743906 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c115aed-312c-4b5d-8b75-bcc68320a21e" path="/var/lib/kubelet/pods/1c115aed-312c-4b5d-8b75-bcc68320a21e/volumes" Feb 27 11:56:00 crc kubenswrapper[4728]: I0227 11:56:00.164670 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536556-rkql5"] Feb 27 11:56:00 crc kubenswrapper[4728]: E0227 11:56:00.167407 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c115aed-312c-4b5d-8b75-bcc68320a21e" containerName="registry-server" Feb 27 11:56:00 crc kubenswrapper[4728]: I0227 11:56:00.167591 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c115aed-312c-4b5d-8b75-bcc68320a21e" containerName="registry-server" Feb 27 11:56:00 crc kubenswrapper[4728]: E0227 11:56:00.167759 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c115aed-312c-4b5d-8b75-bcc68320a21e" containerName="extract-utilities" Feb 27 11:56:00 crc kubenswrapper[4728]: I0227 11:56:00.167888 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c115aed-312c-4b5d-8b75-bcc68320a21e" containerName="extract-utilities" Feb 27 11:56:00 crc kubenswrapper[4728]: E0227 11:56:00.168011 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c115aed-312c-4b5d-8b75-bcc68320a21e" containerName="extract-content" Feb 27 11:56:00 crc kubenswrapper[4728]: I0227 11:56:00.168128 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c115aed-312c-4b5d-8b75-bcc68320a21e" 
containerName="extract-content" Feb 27 11:56:00 crc kubenswrapper[4728]: I0227 11:56:00.168691 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c115aed-312c-4b5d-8b75-bcc68320a21e" containerName="registry-server" Feb 27 11:56:00 crc kubenswrapper[4728]: I0227 11:56:00.170366 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536556-rkql5" Feb 27 11:56:00 crc kubenswrapper[4728]: I0227 11:56:00.174588 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:56:00 crc kubenswrapper[4728]: I0227 11:56:00.175318 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:56:00 crc kubenswrapper[4728]: I0227 11:56:00.177764 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:56:00 crc kubenswrapper[4728]: I0227 11:56:00.185455 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536556-rkql5"] Feb 27 11:56:00 crc kubenswrapper[4728]: I0227 11:56:00.316327 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnzrr\" (UniqueName: \"kubernetes.io/projected/cdec228f-83cb-4dec-8cfe-9b602b906473-kube-api-access-hnzrr\") pod \"auto-csr-approver-29536556-rkql5\" (UID: \"cdec228f-83cb-4dec-8cfe-9b602b906473\") " pod="openshift-infra/auto-csr-approver-29536556-rkql5" Feb 27 11:56:00 crc kubenswrapper[4728]: I0227 11:56:00.418876 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnzrr\" (UniqueName: \"kubernetes.io/projected/cdec228f-83cb-4dec-8cfe-9b602b906473-kube-api-access-hnzrr\") pod \"auto-csr-approver-29536556-rkql5\" (UID: \"cdec228f-83cb-4dec-8cfe-9b602b906473\") " pod="openshift-infra/auto-csr-approver-29536556-rkql5" Feb 27 
11:56:00 crc kubenswrapper[4728]: I0227 11:56:00.446663 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnzrr\" (UniqueName: \"kubernetes.io/projected/cdec228f-83cb-4dec-8cfe-9b602b906473-kube-api-access-hnzrr\") pod \"auto-csr-approver-29536556-rkql5\" (UID: \"cdec228f-83cb-4dec-8cfe-9b602b906473\") " pod="openshift-infra/auto-csr-approver-29536556-rkql5" Feb 27 11:56:00 crc kubenswrapper[4728]: I0227 11:56:00.495817 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536556-rkql5" Feb 27 11:56:01 crc kubenswrapper[4728]: I0227 11:56:01.060793 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536556-rkql5"] Feb 27 11:56:01 crc kubenswrapper[4728]: W0227 11:56:01.064560 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdec228f_83cb_4dec_8cfe_9b602b906473.slice/crio-45c3afc809702abede5bb85ca2af17aeae0ddca92b7f2a40488a970bba31f182 WatchSource:0}: Error finding container 45c3afc809702abede5bb85ca2af17aeae0ddca92b7f2a40488a970bba31f182: Status 404 returned error can't find the container with id 45c3afc809702abede5bb85ca2af17aeae0ddca92b7f2a40488a970bba31f182 Feb 27 11:56:01 crc kubenswrapper[4728]: I0227 11:56:01.589391 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536556-rkql5" event={"ID":"cdec228f-83cb-4dec-8cfe-9b602b906473","Type":"ContainerStarted","Data":"45c3afc809702abede5bb85ca2af17aeae0ddca92b7f2a40488a970bba31f182"} Feb 27 11:56:03 crc kubenswrapper[4728]: I0227 11:56:03.637143 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536556-rkql5" event={"ID":"cdec228f-83cb-4dec-8cfe-9b602b906473","Type":"ContainerStarted","Data":"6fccba046e37a1921a632ed51173098ea0729a21d021f597f49e367c2116c431"} Feb 27 11:56:03 crc 
kubenswrapper[4728]: I0227 11:56:03.659853 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536556-rkql5" podStartSLOduration=2.583880863 podStartE2EDuration="3.659837924s" podCreationTimestamp="2026-02-27 11:56:00 +0000 UTC" firstStartedPulling="2026-02-27 11:56:01.067899232 +0000 UTC m=+5381.030265358" lastFinishedPulling="2026-02-27 11:56:02.143856313 +0000 UTC m=+5382.106222419" observedRunningTime="2026-02-27 11:56:03.658344233 +0000 UTC m=+5383.620710339" watchObservedRunningTime="2026-02-27 11:56:03.659837924 +0000 UTC m=+5383.622204030" Feb 27 11:56:04 crc kubenswrapper[4728]: I0227 11:56:04.665234 4728 generic.go:334] "Generic (PLEG): container finished" podID="cdec228f-83cb-4dec-8cfe-9b602b906473" containerID="6fccba046e37a1921a632ed51173098ea0729a21d021f597f49e367c2116c431" exitCode=0 Feb 27 11:56:04 crc kubenswrapper[4728]: I0227 11:56:04.665337 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536556-rkql5" event={"ID":"cdec228f-83cb-4dec-8cfe-9b602b906473","Type":"ContainerDied","Data":"6fccba046e37a1921a632ed51173098ea0729a21d021f597f49e367c2116c431"} Feb 27 11:56:05 crc kubenswrapper[4728]: I0227 11:56:05.912966 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q7dlt"] Feb 27 11:56:05 crc kubenswrapper[4728]: I0227 11:56:05.916805 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q7dlt" Feb 27 11:56:05 crc kubenswrapper[4728]: I0227 11:56:05.922012 4728 patch_prober.go:28] interesting pod/machine-config-daemon-mf2hh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 11:56:05 crc kubenswrapper[4728]: I0227 11:56:05.922061 4728 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 11:56:05 crc kubenswrapper[4728]: I0227 11:56:05.922110 4728 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" Feb 27 11:56:05 crc kubenswrapper[4728]: I0227 11:56:05.924047 4728 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be"} pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 11:56:05 crc kubenswrapper[4728]: I0227 11:56:05.924099 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerName="machine-config-daemon" containerID="cri-o://9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" gracePeriod=600 Feb 27 11:56:05 crc kubenswrapper[4728]: I0227 11:56:05.961187 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-q7dlt"] Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.073609 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51872522-34bd-4061-80b0-a54fd91d9b36-catalog-content\") pod \"community-operators-q7dlt\" (UID: \"51872522-34bd-4061-80b0-a54fd91d9b36\") " pod="openshift-marketplace/community-operators-q7dlt" Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.074130 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51872522-34bd-4061-80b0-a54fd91d9b36-utilities\") pod \"community-operators-q7dlt\" (UID: \"51872522-34bd-4061-80b0-a54fd91d9b36\") " pod="openshift-marketplace/community-operators-q7dlt" Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.074182 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4cxf\" (UniqueName: \"kubernetes.io/projected/51872522-34bd-4061-80b0-a54fd91d9b36-kube-api-access-p4cxf\") pod \"community-operators-q7dlt\" (UID: \"51872522-34bd-4061-80b0-a54fd91d9b36\") " pod="openshift-marketplace/community-operators-q7dlt" Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.175950 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51872522-34bd-4061-80b0-a54fd91d9b36-utilities\") pod \"community-operators-q7dlt\" (UID: \"51872522-34bd-4061-80b0-a54fd91d9b36\") " pod="openshift-marketplace/community-operators-q7dlt" Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.176015 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4cxf\" (UniqueName: \"kubernetes.io/projected/51872522-34bd-4061-80b0-a54fd91d9b36-kube-api-access-p4cxf\") pod \"community-operators-q7dlt\" 
(UID: \"51872522-34bd-4061-80b0-a54fd91d9b36\") " pod="openshift-marketplace/community-operators-q7dlt" Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.176071 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51872522-34bd-4061-80b0-a54fd91d9b36-catalog-content\") pod \"community-operators-q7dlt\" (UID: \"51872522-34bd-4061-80b0-a54fd91d9b36\") " pod="openshift-marketplace/community-operators-q7dlt" Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.176644 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51872522-34bd-4061-80b0-a54fd91d9b36-catalog-content\") pod \"community-operators-q7dlt\" (UID: \"51872522-34bd-4061-80b0-a54fd91d9b36\") " pod="openshift-marketplace/community-operators-q7dlt" Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.176945 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51872522-34bd-4061-80b0-a54fd91d9b36-utilities\") pod \"community-operators-q7dlt\" (UID: \"51872522-34bd-4061-80b0-a54fd91d9b36\") " pod="openshift-marketplace/community-operators-q7dlt" Feb 27 11:56:06 crc kubenswrapper[4728]: E0227 11:56:06.220658 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.230179 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536556-rkql5" Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.236029 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4cxf\" (UniqueName: \"kubernetes.io/projected/51872522-34bd-4061-80b0-a54fd91d9b36-kube-api-access-p4cxf\") pod \"community-operators-q7dlt\" (UID: \"51872522-34bd-4061-80b0-a54fd91d9b36\") " pod="openshift-marketplace/community-operators-q7dlt" Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.263381 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7dlt" Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.382519 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnzrr\" (UniqueName: \"kubernetes.io/projected/cdec228f-83cb-4dec-8cfe-9b602b906473-kube-api-access-hnzrr\") pod \"cdec228f-83cb-4dec-8cfe-9b602b906473\" (UID: \"cdec228f-83cb-4dec-8cfe-9b602b906473\") " Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.391102 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdec228f-83cb-4dec-8cfe-9b602b906473-kube-api-access-hnzrr" (OuterVolumeSpecName: "kube-api-access-hnzrr") pod "cdec228f-83cb-4dec-8cfe-9b602b906473" (UID: "cdec228f-83cb-4dec-8cfe-9b602b906473"). InnerVolumeSpecName "kube-api-access-hnzrr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.485727 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnzrr\" (UniqueName: \"kubernetes.io/projected/cdec228f-83cb-4dec-8cfe-9b602b906473-kube-api-access-hnzrr\") on node \"crc\" DevicePath \"\"" Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.702621 4728 generic.go:334] "Generic (PLEG): container finished" podID="c2cfd349-f825-497b-b698-7fb6bc258b22" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" exitCode=0 Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.702681 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerDied","Data":"9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be"} Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.702763 4728 scope.go:117] "RemoveContainer" containerID="e81ecfd96a35ea0b894753598830f159eb681a2f01ac3ed1749128d3d4f90dbd" Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.703749 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 11:56:06 crc kubenswrapper[4728]: E0227 11:56:06.704445 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.705346 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536556-rkql5" 
event={"ID":"cdec228f-83cb-4dec-8cfe-9b602b906473","Type":"ContainerDied","Data":"45c3afc809702abede5bb85ca2af17aeae0ddca92b7f2a40488a970bba31f182"} Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.705377 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45c3afc809702abede5bb85ca2af17aeae0ddca92b7f2a40488a970bba31f182" Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.705397 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536556-rkql5" Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.767476 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536550-6m6lp"] Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.815364 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536550-6m6lp"] Feb 27 11:56:06 crc kubenswrapper[4728]: I0227 11:56:06.838266 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7dlt"] Feb 27 11:56:07 crc kubenswrapper[4728]: I0227 11:56:07.738201 4728 generic.go:334] "Generic (PLEG): container finished" podID="51872522-34bd-4061-80b0-a54fd91d9b36" containerID="76d359264bb2a672f97036b944cec7f72508013193cdc3d7e04a3270e6cafec3" exitCode=0 Feb 27 11:56:07 crc kubenswrapper[4728]: I0227 11:56:07.738317 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7dlt" event={"ID":"51872522-34bd-4061-80b0-a54fd91d9b36","Type":"ContainerDied","Data":"76d359264bb2a672f97036b944cec7f72508013193cdc3d7e04a3270e6cafec3"} Feb 27 11:56:07 crc kubenswrapper[4728]: I0227 11:56:07.738789 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7dlt" event={"ID":"51872522-34bd-4061-80b0-a54fd91d9b36","Type":"ContainerStarted","Data":"64f86e56e79a5d90e86f745ffc673a348a7eafa98f00bb9e6076847083e47f9b"} Feb 27 
11:56:08 crc kubenswrapper[4728]: I0227 11:56:08.742387 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0ba33a7-8d7d-49b3-9c9b-72902af18214" path="/var/lib/kubelet/pods/e0ba33a7-8d7d-49b3-9c9b-72902af18214/volumes" Feb 27 11:56:08 crc kubenswrapper[4728]: I0227 11:56:08.754863 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7dlt" event={"ID":"51872522-34bd-4061-80b0-a54fd91d9b36","Type":"ContainerStarted","Data":"f25bd072bb773cbe23baac1610b59286c671c4e85953f91f96ec71b07083f877"} Feb 27 11:56:10 crc kubenswrapper[4728]: I0227 11:56:10.910235 4728 generic.go:334] "Generic (PLEG): container finished" podID="51872522-34bd-4061-80b0-a54fd91d9b36" containerID="f25bd072bb773cbe23baac1610b59286c671c4e85953f91f96ec71b07083f877" exitCode=0 Feb 27 11:56:10 crc kubenswrapper[4728]: I0227 11:56:10.910314 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7dlt" event={"ID":"51872522-34bd-4061-80b0-a54fd91d9b36","Type":"ContainerDied","Data":"f25bd072bb773cbe23baac1610b59286c671c4e85953f91f96ec71b07083f877"} Feb 27 11:56:12 crc kubenswrapper[4728]: I0227 11:56:12.941877 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7dlt" event={"ID":"51872522-34bd-4061-80b0-a54fd91d9b36","Type":"ContainerStarted","Data":"7b790905952e2b1493ad0d43abcb423e8ac053d782f087a5136bdc96aa875c14"} Feb 27 11:56:12 crc kubenswrapper[4728]: I0227 11:56:12.968636 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q7dlt" podStartSLOduration=4.305962883 podStartE2EDuration="7.968609575s" podCreationTimestamp="2026-02-27 11:56:05 +0000 UTC" firstStartedPulling="2026-02-27 11:56:07.742659422 +0000 UTC m=+5387.705025528" lastFinishedPulling="2026-02-27 11:56:11.405306114 +0000 UTC m=+5391.367672220" observedRunningTime="2026-02-27 11:56:12.965630403 +0000 UTC 
m=+5392.927996529" watchObservedRunningTime="2026-02-27 11:56:12.968609575 +0000 UTC m=+5392.930975691" Feb 27 11:56:16 crc kubenswrapper[4728]: I0227 11:56:16.264481 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q7dlt" Feb 27 11:56:16 crc kubenswrapper[4728]: I0227 11:56:16.265674 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q7dlt" Feb 27 11:56:16 crc kubenswrapper[4728]: I0227 11:56:16.360846 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q7dlt" Feb 27 11:56:17 crc kubenswrapper[4728]: I0227 11:56:17.112741 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q7dlt" Feb 27 11:56:17 crc kubenswrapper[4728]: I0227 11:56:17.193540 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7dlt"] Feb 27 11:56:19 crc kubenswrapper[4728]: I0227 11:56:19.044728 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q7dlt" podUID="51872522-34bd-4061-80b0-a54fd91d9b36" containerName="registry-server" containerID="cri-o://7b790905952e2b1493ad0d43abcb423e8ac053d782f087a5136bdc96aa875c14" gracePeriod=2 Feb 27 11:56:19 crc kubenswrapper[4728]: I0227 11:56:19.683080 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q7dlt" Feb 27 11:56:19 crc kubenswrapper[4728]: I0227 11:56:19.717496 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51872522-34bd-4061-80b0-a54fd91d9b36-utilities\") pod \"51872522-34bd-4061-80b0-a54fd91d9b36\" (UID: \"51872522-34bd-4061-80b0-a54fd91d9b36\") " Feb 27 11:56:19 crc kubenswrapper[4728]: I0227 11:56:19.717788 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4cxf\" (UniqueName: \"kubernetes.io/projected/51872522-34bd-4061-80b0-a54fd91d9b36-kube-api-access-p4cxf\") pod \"51872522-34bd-4061-80b0-a54fd91d9b36\" (UID: \"51872522-34bd-4061-80b0-a54fd91d9b36\") " Feb 27 11:56:19 crc kubenswrapper[4728]: I0227 11:56:19.718233 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51872522-34bd-4061-80b0-a54fd91d9b36-catalog-content\") pod \"51872522-34bd-4061-80b0-a54fd91d9b36\" (UID: \"51872522-34bd-4061-80b0-a54fd91d9b36\") " Feb 27 11:56:19 crc kubenswrapper[4728]: I0227 11:56:19.718730 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51872522-34bd-4061-80b0-a54fd91d9b36-utilities" (OuterVolumeSpecName: "utilities") pod "51872522-34bd-4061-80b0-a54fd91d9b36" (UID: "51872522-34bd-4061-80b0-a54fd91d9b36"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:56:19 crc kubenswrapper[4728]: I0227 11:56:19.720010 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51872522-34bd-4061-80b0-a54fd91d9b36-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:56:19 crc kubenswrapper[4728]: I0227 11:56:19.747374 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51872522-34bd-4061-80b0-a54fd91d9b36-kube-api-access-p4cxf" (OuterVolumeSpecName: "kube-api-access-p4cxf") pod "51872522-34bd-4061-80b0-a54fd91d9b36" (UID: "51872522-34bd-4061-80b0-a54fd91d9b36"). InnerVolumeSpecName "kube-api-access-p4cxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:56:19 crc kubenswrapper[4728]: I0227 11:56:19.812026 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51872522-34bd-4061-80b0-a54fd91d9b36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51872522-34bd-4061-80b0-a54fd91d9b36" (UID: "51872522-34bd-4061-80b0-a54fd91d9b36"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:56:19 crc kubenswrapper[4728]: I0227 11:56:19.824008 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51872522-34bd-4061-80b0-a54fd91d9b36-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:56:19 crc kubenswrapper[4728]: I0227 11:56:19.824664 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4cxf\" (UniqueName: \"kubernetes.io/projected/51872522-34bd-4061-80b0-a54fd91d9b36-kube-api-access-p4cxf\") on node \"crc\" DevicePath \"\"" Feb 27 11:56:20 crc kubenswrapper[4728]: I0227 11:56:20.061700 4728 generic.go:334] "Generic (PLEG): container finished" podID="51872522-34bd-4061-80b0-a54fd91d9b36" containerID="7b790905952e2b1493ad0d43abcb423e8ac053d782f087a5136bdc96aa875c14" exitCode=0 Feb 27 11:56:20 crc kubenswrapper[4728]: I0227 11:56:20.061756 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7dlt" event={"ID":"51872522-34bd-4061-80b0-a54fd91d9b36","Type":"ContainerDied","Data":"7b790905952e2b1493ad0d43abcb423e8ac053d782f087a5136bdc96aa875c14"} Feb 27 11:56:20 crc kubenswrapper[4728]: I0227 11:56:20.061840 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7dlt" event={"ID":"51872522-34bd-4061-80b0-a54fd91d9b36","Type":"ContainerDied","Data":"64f86e56e79a5d90e86f745ffc673a348a7eafa98f00bb9e6076847083e47f9b"} Feb 27 11:56:20 crc kubenswrapper[4728]: I0227 11:56:20.061856 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q7dlt" Feb 27 11:56:20 crc kubenswrapper[4728]: I0227 11:56:20.061883 4728 scope.go:117] "RemoveContainer" containerID="7b790905952e2b1493ad0d43abcb423e8ac053d782f087a5136bdc96aa875c14" Feb 27 11:56:20 crc kubenswrapper[4728]: I0227 11:56:20.107413 4728 scope.go:117] "RemoveContainer" containerID="f25bd072bb773cbe23baac1610b59286c671c4e85953f91f96ec71b07083f877" Feb 27 11:56:20 crc kubenswrapper[4728]: I0227 11:56:20.124241 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7dlt"] Feb 27 11:56:20 crc kubenswrapper[4728]: I0227 11:56:20.142133 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q7dlt"] Feb 27 11:56:20 crc kubenswrapper[4728]: I0227 11:56:20.145783 4728 scope.go:117] "RemoveContainer" containerID="76d359264bb2a672f97036b944cec7f72508013193cdc3d7e04a3270e6cafec3" Feb 27 11:56:20 crc kubenswrapper[4728]: I0227 11:56:20.226233 4728 scope.go:117] "RemoveContainer" containerID="7b790905952e2b1493ad0d43abcb423e8ac053d782f087a5136bdc96aa875c14" Feb 27 11:56:20 crc kubenswrapper[4728]: E0227 11:56:20.227990 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b790905952e2b1493ad0d43abcb423e8ac053d782f087a5136bdc96aa875c14\": container with ID starting with 7b790905952e2b1493ad0d43abcb423e8ac053d782f087a5136bdc96aa875c14 not found: ID does not exist" containerID="7b790905952e2b1493ad0d43abcb423e8ac053d782f087a5136bdc96aa875c14" Feb 27 11:56:20 crc kubenswrapper[4728]: I0227 11:56:20.228035 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b790905952e2b1493ad0d43abcb423e8ac053d782f087a5136bdc96aa875c14"} err="failed to get container status \"7b790905952e2b1493ad0d43abcb423e8ac053d782f087a5136bdc96aa875c14\": rpc error: code = NotFound desc = could not find 
container \"7b790905952e2b1493ad0d43abcb423e8ac053d782f087a5136bdc96aa875c14\": container with ID starting with 7b790905952e2b1493ad0d43abcb423e8ac053d782f087a5136bdc96aa875c14 not found: ID does not exist" Feb 27 11:56:20 crc kubenswrapper[4728]: I0227 11:56:20.228069 4728 scope.go:117] "RemoveContainer" containerID="f25bd072bb773cbe23baac1610b59286c671c4e85953f91f96ec71b07083f877" Feb 27 11:56:20 crc kubenswrapper[4728]: E0227 11:56:20.228990 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f25bd072bb773cbe23baac1610b59286c671c4e85953f91f96ec71b07083f877\": container with ID starting with f25bd072bb773cbe23baac1610b59286c671c4e85953f91f96ec71b07083f877 not found: ID does not exist" containerID="f25bd072bb773cbe23baac1610b59286c671c4e85953f91f96ec71b07083f877" Feb 27 11:56:20 crc kubenswrapper[4728]: I0227 11:56:20.229025 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f25bd072bb773cbe23baac1610b59286c671c4e85953f91f96ec71b07083f877"} err="failed to get container status \"f25bd072bb773cbe23baac1610b59286c671c4e85953f91f96ec71b07083f877\": rpc error: code = NotFound desc = could not find container \"f25bd072bb773cbe23baac1610b59286c671c4e85953f91f96ec71b07083f877\": container with ID starting with f25bd072bb773cbe23baac1610b59286c671c4e85953f91f96ec71b07083f877 not found: ID does not exist" Feb 27 11:56:20 crc kubenswrapper[4728]: I0227 11:56:20.229052 4728 scope.go:117] "RemoveContainer" containerID="76d359264bb2a672f97036b944cec7f72508013193cdc3d7e04a3270e6cafec3" Feb 27 11:56:20 crc kubenswrapper[4728]: E0227 11:56:20.230908 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76d359264bb2a672f97036b944cec7f72508013193cdc3d7e04a3270e6cafec3\": container with ID starting with 76d359264bb2a672f97036b944cec7f72508013193cdc3d7e04a3270e6cafec3 not found: ID does 
not exist" containerID="76d359264bb2a672f97036b944cec7f72508013193cdc3d7e04a3270e6cafec3" Feb 27 11:56:20 crc kubenswrapper[4728]: I0227 11:56:20.230953 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76d359264bb2a672f97036b944cec7f72508013193cdc3d7e04a3270e6cafec3"} err="failed to get container status \"76d359264bb2a672f97036b944cec7f72508013193cdc3d7e04a3270e6cafec3\": rpc error: code = NotFound desc = could not find container \"76d359264bb2a672f97036b944cec7f72508013193cdc3d7e04a3270e6cafec3\": container with ID starting with 76d359264bb2a672f97036b944cec7f72508013193cdc3d7e04a3270e6cafec3 not found: ID does not exist" Feb 27 11:56:20 crc kubenswrapper[4728]: I0227 11:56:20.752378 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51872522-34bd-4061-80b0-a54fd91d9b36" path="/var/lib/kubelet/pods/51872522-34bd-4061-80b0-a54fd91d9b36/volumes" Feb 27 11:56:21 crc kubenswrapper[4728]: I0227 11:56:21.725237 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 11:56:21 crc kubenswrapper[4728]: E0227 11:56:21.725674 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:56:33 crc kubenswrapper[4728]: I0227 11:56:33.608783 4728 scope.go:117] "RemoveContainer" containerID="fb740cad9b5de1bd6362c3716b8ed2e64d984fe7d104cf2e42db49148be9c2ed" Feb 27 11:56:34 crc kubenswrapper[4728]: I0227 11:56:34.725645 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 11:56:34 crc 
kubenswrapper[4728]: E0227 11:56:34.726646 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.096722 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qmf9n"] Feb 27 11:56:38 crc kubenswrapper[4728]: E0227 11:56:38.098087 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdec228f-83cb-4dec-8cfe-9b602b906473" containerName="oc" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.098104 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdec228f-83cb-4dec-8cfe-9b602b906473" containerName="oc" Feb 27 11:56:38 crc kubenswrapper[4728]: E0227 11:56:38.098134 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51872522-34bd-4061-80b0-a54fd91d9b36" containerName="extract-utilities" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.098144 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="51872522-34bd-4061-80b0-a54fd91d9b36" containerName="extract-utilities" Feb 27 11:56:38 crc kubenswrapper[4728]: E0227 11:56:38.098172 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51872522-34bd-4061-80b0-a54fd91d9b36" containerName="extract-content" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.098181 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="51872522-34bd-4061-80b0-a54fd91d9b36" containerName="extract-content" Feb 27 11:56:38 crc kubenswrapper[4728]: E0227 11:56:38.098218 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51872522-34bd-4061-80b0-a54fd91d9b36" 
containerName="registry-server" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.098226 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="51872522-34bd-4061-80b0-a54fd91d9b36" containerName="registry-server" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.098563 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdec228f-83cb-4dec-8cfe-9b602b906473" containerName="oc" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.098608 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="51872522-34bd-4061-80b0-a54fd91d9b36" containerName="registry-server" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.104179 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmf9n" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.136797 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmf9n"] Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.268278 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f09bd01-e7ef-447d-87a6-6833e8861688-utilities\") pod \"redhat-marketplace-qmf9n\" (UID: \"6f09bd01-e7ef-447d-87a6-6833e8861688\") " pod="openshift-marketplace/redhat-marketplace-qmf9n" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.268362 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs5n8\" (UniqueName: \"kubernetes.io/projected/6f09bd01-e7ef-447d-87a6-6833e8861688-kube-api-access-rs5n8\") pod \"redhat-marketplace-qmf9n\" (UID: \"6f09bd01-e7ef-447d-87a6-6833e8861688\") " pod="openshift-marketplace/redhat-marketplace-qmf9n" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.269076 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/6f09bd01-e7ef-447d-87a6-6833e8861688-catalog-content\") pod \"redhat-marketplace-qmf9n\" (UID: \"6f09bd01-e7ef-447d-87a6-6833e8861688\") " pod="openshift-marketplace/redhat-marketplace-qmf9n" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.294372 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tszck"] Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.336757 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tszck" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.373267 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f09bd01-e7ef-447d-87a6-6833e8861688-catalog-content\") pod \"redhat-marketplace-qmf9n\" (UID: \"6f09bd01-e7ef-447d-87a6-6833e8861688\") " pod="openshift-marketplace/redhat-marketplace-qmf9n" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.373341 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a266b82-2ee2-40d2-a2f3-7673165658a5-catalog-content\") pod \"certified-operators-tszck\" (UID: \"5a266b82-2ee2-40d2-a2f3-7673165658a5\") " pod="openshift-marketplace/certified-operators-tszck" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.373422 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f09bd01-e7ef-447d-87a6-6833e8861688-utilities\") pod \"redhat-marketplace-qmf9n\" (UID: \"6f09bd01-e7ef-447d-87a6-6833e8861688\") " pod="openshift-marketplace/redhat-marketplace-qmf9n" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.373458 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs5n8\" (UniqueName: 
\"kubernetes.io/projected/6f09bd01-e7ef-447d-87a6-6833e8861688-kube-api-access-rs5n8\") pod \"redhat-marketplace-qmf9n\" (UID: \"6f09bd01-e7ef-447d-87a6-6833e8861688\") " pod="openshift-marketplace/redhat-marketplace-qmf9n" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.373480 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a266b82-2ee2-40d2-a2f3-7673165658a5-utilities\") pod \"certified-operators-tszck\" (UID: \"5a266b82-2ee2-40d2-a2f3-7673165658a5\") " pod="openshift-marketplace/certified-operators-tszck" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.373559 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qc4n\" (UniqueName: \"kubernetes.io/projected/5a266b82-2ee2-40d2-a2f3-7673165658a5-kube-api-access-4qc4n\") pod \"certified-operators-tszck\" (UID: \"5a266b82-2ee2-40d2-a2f3-7673165658a5\") " pod="openshift-marketplace/certified-operators-tszck" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.374082 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f09bd01-e7ef-447d-87a6-6833e8861688-catalog-content\") pod \"redhat-marketplace-qmf9n\" (UID: \"6f09bd01-e7ef-447d-87a6-6833e8861688\") " pod="openshift-marketplace/redhat-marketplace-qmf9n" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.374335 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f09bd01-e7ef-447d-87a6-6833e8861688-utilities\") pod \"redhat-marketplace-qmf9n\" (UID: \"6f09bd01-e7ef-447d-87a6-6833e8861688\") " pod="openshift-marketplace/redhat-marketplace-qmf9n" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.396347 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tszck"] Feb 27 
11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.475974 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a266b82-2ee2-40d2-a2f3-7673165658a5-utilities\") pod \"certified-operators-tszck\" (UID: \"5a266b82-2ee2-40d2-a2f3-7673165658a5\") " pod="openshift-marketplace/certified-operators-tszck" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.476135 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qc4n\" (UniqueName: \"kubernetes.io/projected/5a266b82-2ee2-40d2-a2f3-7673165658a5-kube-api-access-4qc4n\") pod \"certified-operators-tszck\" (UID: \"5a266b82-2ee2-40d2-a2f3-7673165658a5\") " pod="openshift-marketplace/certified-operators-tszck" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.476343 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a266b82-2ee2-40d2-a2f3-7673165658a5-catalog-content\") pod \"certified-operators-tszck\" (UID: \"5a266b82-2ee2-40d2-a2f3-7673165658a5\") " pod="openshift-marketplace/certified-operators-tszck" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.476440 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a266b82-2ee2-40d2-a2f3-7673165658a5-utilities\") pod \"certified-operators-tszck\" (UID: \"5a266b82-2ee2-40d2-a2f3-7673165658a5\") " pod="openshift-marketplace/certified-operators-tszck" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.476989 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a266b82-2ee2-40d2-a2f3-7673165658a5-catalog-content\") pod \"certified-operators-tszck\" (UID: \"5a266b82-2ee2-40d2-a2f3-7673165658a5\") " pod="openshift-marketplace/certified-operators-tszck" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 
11:56:38.909519 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qc4n\" (UniqueName: \"kubernetes.io/projected/5a266b82-2ee2-40d2-a2f3-7673165658a5-kube-api-access-4qc4n\") pod \"certified-operators-tszck\" (UID: \"5a266b82-2ee2-40d2-a2f3-7673165658a5\") " pod="openshift-marketplace/certified-operators-tszck" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.909866 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs5n8\" (UniqueName: \"kubernetes.io/projected/6f09bd01-e7ef-447d-87a6-6833e8861688-kube-api-access-rs5n8\") pod \"redhat-marketplace-qmf9n\" (UID: \"6f09bd01-e7ef-447d-87a6-6833e8861688\") " pod="openshift-marketplace/redhat-marketplace-qmf9n" Feb 27 11:56:38 crc kubenswrapper[4728]: I0227 11:56:38.972876 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tszck" Feb 27 11:56:39 crc kubenswrapper[4728]: I0227 11:56:39.052926 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmf9n" Feb 27 11:56:39 crc kubenswrapper[4728]: I0227 11:56:39.487700 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tszck"] Feb 27 11:56:39 crc kubenswrapper[4728]: W0227 11:56:39.491544 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a266b82_2ee2_40d2_a2f3_7673165658a5.slice/crio-48ac8541dbf6d6ed14a4fa3a31a8c12139bd60236798dd9afa8f805f84550070 WatchSource:0}: Error finding container 48ac8541dbf6d6ed14a4fa3a31a8c12139bd60236798dd9afa8f805f84550070: Status 404 returned error can't find the container with id 48ac8541dbf6d6ed14a4fa3a31a8c12139bd60236798dd9afa8f805f84550070 Feb 27 11:56:39 crc kubenswrapper[4728]: I0227 11:56:39.622672 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmf9n"] Feb 27 11:56:39 crc kubenswrapper[4728]: W0227 11:56:39.629094 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f09bd01_e7ef_447d_87a6_6833e8861688.slice/crio-a368188e0554eada4bcd65ff3381a3f61bf1e8c839ca5a56f66f421ca7fc1a86 WatchSource:0}: Error finding container a368188e0554eada4bcd65ff3381a3f61bf1e8c839ca5a56f66f421ca7fc1a86: Status 404 returned error can't find the container with id a368188e0554eada4bcd65ff3381a3f61bf1e8c839ca5a56f66f421ca7fc1a86 Feb 27 11:56:40 crc kubenswrapper[4728]: I0227 11:56:40.373111 4728 generic.go:334] "Generic (PLEG): container finished" podID="6f09bd01-e7ef-447d-87a6-6833e8861688" containerID="deb6edb6c88af1abf453c023d3ff98ca8eba81f6fae89ed971e7cf35f9b7d2c3" exitCode=0 Feb 27 11:56:40 crc kubenswrapper[4728]: I0227 11:56:40.373303 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmf9n" 
event={"ID":"6f09bd01-e7ef-447d-87a6-6833e8861688","Type":"ContainerDied","Data":"deb6edb6c88af1abf453c023d3ff98ca8eba81f6fae89ed971e7cf35f9b7d2c3"} Feb 27 11:56:40 crc kubenswrapper[4728]: I0227 11:56:40.373457 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmf9n" event={"ID":"6f09bd01-e7ef-447d-87a6-6833e8861688","Type":"ContainerStarted","Data":"a368188e0554eada4bcd65ff3381a3f61bf1e8c839ca5a56f66f421ca7fc1a86"} Feb 27 11:56:40 crc kubenswrapper[4728]: I0227 11:56:40.378587 4728 generic.go:334] "Generic (PLEG): container finished" podID="5a266b82-2ee2-40d2-a2f3-7673165658a5" containerID="24164cfe445ede9bef371ab566b7a86a7ab5a0c2396dd5fa303a35236daa975f" exitCode=0 Feb 27 11:56:40 crc kubenswrapper[4728]: I0227 11:56:40.378612 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tszck" event={"ID":"5a266b82-2ee2-40d2-a2f3-7673165658a5","Type":"ContainerDied","Data":"24164cfe445ede9bef371ab566b7a86a7ab5a0c2396dd5fa303a35236daa975f"} Feb 27 11:56:40 crc kubenswrapper[4728]: I0227 11:56:40.378627 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tszck" event={"ID":"5a266b82-2ee2-40d2-a2f3-7673165658a5","Type":"ContainerStarted","Data":"48ac8541dbf6d6ed14a4fa3a31a8c12139bd60236798dd9afa8f805f84550070"} Feb 27 11:56:41 crc kubenswrapper[4728]: I0227 11:56:41.316609 4728 trace.go:236] Trace[320243706]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (27-Feb-2026 11:56:40.292) (total time: 1023ms): Feb 27 11:56:41 crc kubenswrapper[4728]: Trace[320243706]: [1.02312835s] [1.02312835s] END Feb 27 11:56:42 crc kubenswrapper[4728]: I0227 11:56:42.416007 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tszck" 
event={"ID":"5a266b82-2ee2-40d2-a2f3-7673165658a5","Type":"ContainerStarted","Data":"50a46b678741e8b8a5f21d0e2384c2c30f189fee0d55eeb841d997fce9ec7066"} Feb 27 11:56:42 crc kubenswrapper[4728]: I0227 11:56:42.419089 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmf9n" event={"ID":"6f09bd01-e7ef-447d-87a6-6833e8861688","Type":"ContainerStarted","Data":"22081a77eef1a39a35562468a614ff58ce9d3f089cb6c95dc52f1ff6c3ec9ba2"} Feb 27 11:56:44 crc kubenswrapper[4728]: I0227 11:56:44.450634 4728 generic.go:334] "Generic (PLEG): container finished" podID="6f09bd01-e7ef-447d-87a6-6833e8861688" containerID="22081a77eef1a39a35562468a614ff58ce9d3f089cb6c95dc52f1ff6c3ec9ba2" exitCode=0 Feb 27 11:56:44 crc kubenswrapper[4728]: I0227 11:56:44.450760 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmf9n" event={"ID":"6f09bd01-e7ef-447d-87a6-6833e8861688","Type":"ContainerDied","Data":"22081a77eef1a39a35562468a614ff58ce9d3f089cb6c95dc52f1ff6c3ec9ba2"} Feb 27 11:56:45 crc kubenswrapper[4728]: I0227 11:56:45.467292 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmf9n" event={"ID":"6f09bd01-e7ef-447d-87a6-6833e8861688","Type":"ContainerStarted","Data":"e4b31a9010e03ca195f1d5bba0c1ae2a2007d1b90c46ff49d0d20bc6373a2fa4"} Feb 27 11:56:45 crc kubenswrapper[4728]: I0227 11:56:45.470107 4728 generic.go:334] "Generic (PLEG): container finished" podID="5a266b82-2ee2-40d2-a2f3-7673165658a5" containerID="50a46b678741e8b8a5f21d0e2384c2c30f189fee0d55eeb841d997fce9ec7066" exitCode=0 Feb 27 11:56:45 crc kubenswrapper[4728]: I0227 11:56:45.470156 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tszck" event={"ID":"5a266b82-2ee2-40d2-a2f3-7673165658a5","Type":"ContainerDied","Data":"50a46b678741e8b8a5f21d0e2384c2c30f189fee0d55eeb841d997fce9ec7066"} Feb 27 11:56:45 crc kubenswrapper[4728]: I0227 
11:56:45.507860 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qmf9n" podStartSLOduration=3.020256828 podStartE2EDuration="7.507837485s" podCreationTimestamp="2026-02-27 11:56:38 +0000 UTC" firstStartedPulling="2026-02-27 11:56:40.376557545 +0000 UTC m=+5420.338923651" lastFinishedPulling="2026-02-27 11:56:44.864138162 +0000 UTC m=+5424.826504308" observedRunningTime="2026-02-27 11:56:45.496013053 +0000 UTC m=+5425.458379179" watchObservedRunningTime="2026-02-27 11:56:45.507837485 +0000 UTC m=+5425.470203591" Feb 27 11:56:46 crc kubenswrapper[4728]: I0227 11:56:46.484044 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tszck" event={"ID":"5a266b82-2ee2-40d2-a2f3-7673165658a5","Type":"ContainerStarted","Data":"b51edc40a8ce5993b20ecd2489265c8c9aaa13cdb748135d60b2dbff356d15d7"} Feb 27 11:56:46 crc kubenswrapper[4728]: I0227 11:56:46.501780 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tszck" podStartSLOduration=2.9851294790000003 podStartE2EDuration="8.501756119s" podCreationTimestamp="2026-02-27 11:56:38 +0000 UTC" firstStartedPulling="2026-02-27 11:56:40.382015283 +0000 UTC m=+5420.344381419" lastFinishedPulling="2026-02-27 11:56:45.898641943 +0000 UTC m=+5425.861008059" observedRunningTime="2026-02-27 11:56:46.499893249 +0000 UTC m=+5426.462259355" watchObservedRunningTime="2026-02-27 11:56:46.501756119 +0000 UTC m=+5426.464122215" Feb 27 11:56:48 crc kubenswrapper[4728]: I0227 11:56:48.973587 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tszck" Feb 27 11:56:48 crc kubenswrapper[4728]: I0227 11:56:48.974396 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tszck" Feb 27 11:56:49 crc kubenswrapper[4728]: I0227 11:56:49.053315 4728 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qmf9n" Feb 27 11:56:49 crc kubenswrapper[4728]: I0227 11:56:49.054430 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qmf9n" Feb 27 11:56:49 crc kubenswrapper[4728]: I0227 11:56:49.725427 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 11:56:49 crc kubenswrapper[4728]: E0227 11:56:49.726130 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:56:50 crc kubenswrapper[4728]: I0227 11:56:50.049284 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tszck" podUID="5a266b82-2ee2-40d2-a2f3-7673165658a5" containerName="registry-server" probeResult="failure" output=< Feb 27 11:56:50 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:56:50 crc kubenswrapper[4728]: > Feb 27 11:56:50 crc kubenswrapper[4728]: I0227 11:56:50.100002 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-qmf9n" podUID="6f09bd01-e7ef-447d-87a6-6833e8861688" containerName="registry-server" probeResult="failure" output=< Feb 27 11:56:50 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:56:50 crc kubenswrapper[4728]: > Feb 27 11:56:59 crc kubenswrapper[4728]: I0227 11:56:59.460184 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-qmf9n" Feb 27 11:56:59 crc kubenswrapper[4728]: I0227 11:56:59.514075 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qmf9n" Feb 27 11:56:59 crc kubenswrapper[4728]: I0227 11:56:59.705598 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmf9n"] Feb 27 11:57:00 crc kubenswrapper[4728]: I0227 11:57:00.468878 4728 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tszck" podUID="5a266b82-2ee2-40d2-a2f3-7673165658a5" containerName="registry-server" probeResult="failure" output=< Feb 27 11:57:00 crc kubenswrapper[4728]: timeout: failed to connect service ":50051" within 1s Feb 27 11:57:00 crc kubenswrapper[4728]: > Feb 27 11:57:00 crc kubenswrapper[4728]: I0227 11:57:00.651050 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qmf9n" podUID="6f09bd01-e7ef-447d-87a6-6833e8861688" containerName="registry-server" containerID="cri-o://e4b31a9010e03ca195f1d5bba0c1ae2a2007d1b90c46ff49d0d20bc6373a2fa4" gracePeriod=2 Feb 27 11:57:01 crc kubenswrapper[4728]: I0227 11:57:01.666550 4728 generic.go:334] "Generic (PLEG): container finished" podID="6f09bd01-e7ef-447d-87a6-6833e8861688" containerID="e4b31a9010e03ca195f1d5bba0c1ae2a2007d1b90c46ff49d0d20bc6373a2fa4" exitCode=0 Feb 27 11:57:01 crc kubenswrapper[4728]: I0227 11:57:01.666576 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmf9n" event={"ID":"6f09bd01-e7ef-447d-87a6-6833e8861688","Type":"ContainerDied","Data":"e4b31a9010e03ca195f1d5bba0c1ae2a2007d1b90c46ff49d0d20bc6373a2fa4"} Feb 27 11:57:01 crc kubenswrapper[4728]: I0227 11:57:01.725747 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 11:57:01 crc kubenswrapper[4728]: 
E0227 11:57:01.726926 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:57:01 crc kubenswrapper[4728]: I0227 11:57:01.914639 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmf9n" Feb 27 11:57:01 crc kubenswrapper[4728]: I0227 11:57:01.958997 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs5n8\" (UniqueName: \"kubernetes.io/projected/6f09bd01-e7ef-447d-87a6-6833e8861688-kube-api-access-rs5n8\") pod \"6f09bd01-e7ef-447d-87a6-6833e8861688\" (UID: \"6f09bd01-e7ef-447d-87a6-6833e8861688\") " Feb 27 11:57:01 crc kubenswrapper[4728]: I0227 11:57:01.959076 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f09bd01-e7ef-447d-87a6-6833e8861688-utilities\") pod \"6f09bd01-e7ef-447d-87a6-6833e8861688\" (UID: \"6f09bd01-e7ef-447d-87a6-6833e8861688\") " Feb 27 11:57:01 crc kubenswrapper[4728]: I0227 11:57:01.959105 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f09bd01-e7ef-447d-87a6-6833e8861688-catalog-content\") pod \"6f09bd01-e7ef-447d-87a6-6833e8861688\" (UID: \"6f09bd01-e7ef-447d-87a6-6833e8861688\") " Feb 27 11:57:01 crc kubenswrapper[4728]: I0227 11:57:01.961176 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f09bd01-e7ef-447d-87a6-6833e8861688-utilities" (OuterVolumeSpecName: "utilities") pod 
"6f09bd01-e7ef-447d-87a6-6833e8861688" (UID: "6f09bd01-e7ef-447d-87a6-6833e8861688"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:57:01 crc kubenswrapper[4728]: I0227 11:57:01.973369 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f09bd01-e7ef-447d-87a6-6833e8861688-kube-api-access-rs5n8" (OuterVolumeSpecName: "kube-api-access-rs5n8") pod "6f09bd01-e7ef-447d-87a6-6833e8861688" (UID: "6f09bd01-e7ef-447d-87a6-6833e8861688"). InnerVolumeSpecName "kube-api-access-rs5n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:57:01 crc kubenswrapper[4728]: I0227 11:57:01.983756 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f09bd01-e7ef-447d-87a6-6833e8861688-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f09bd01-e7ef-447d-87a6-6833e8861688" (UID: "6f09bd01-e7ef-447d-87a6-6833e8861688"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:57:02 crc kubenswrapper[4728]: I0227 11:57:02.062729 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs5n8\" (UniqueName: \"kubernetes.io/projected/6f09bd01-e7ef-447d-87a6-6833e8861688-kube-api-access-rs5n8\") on node \"crc\" DevicePath \"\"" Feb 27 11:57:02 crc kubenswrapper[4728]: I0227 11:57:02.062763 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f09bd01-e7ef-447d-87a6-6833e8861688-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:57:02 crc kubenswrapper[4728]: I0227 11:57:02.062776 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f09bd01-e7ef-447d-87a6-6833e8861688-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:57:02 crc kubenswrapper[4728]: I0227 11:57:02.680994 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmf9n" event={"ID":"6f09bd01-e7ef-447d-87a6-6833e8861688","Type":"ContainerDied","Data":"a368188e0554eada4bcd65ff3381a3f61bf1e8c839ca5a56f66f421ca7fc1a86"} Feb 27 11:57:02 crc kubenswrapper[4728]: I0227 11:57:02.681401 4728 scope.go:117] "RemoveContainer" containerID="e4b31a9010e03ca195f1d5bba0c1ae2a2007d1b90c46ff49d0d20bc6373a2fa4" Feb 27 11:57:02 crc kubenswrapper[4728]: I0227 11:57:02.681596 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmf9n" Feb 27 11:57:02 crc kubenswrapper[4728]: I0227 11:57:02.712630 4728 scope.go:117] "RemoveContainer" containerID="22081a77eef1a39a35562468a614ff58ce9d3f089cb6c95dc52f1ff6c3ec9ba2" Feb 27 11:57:02 crc kubenswrapper[4728]: I0227 11:57:02.744972 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmf9n"] Feb 27 11:57:02 crc kubenswrapper[4728]: I0227 11:57:02.745009 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmf9n"] Feb 27 11:57:02 crc kubenswrapper[4728]: I0227 11:57:02.750565 4728 scope.go:117] "RemoveContainer" containerID="deb6edb6c88af1abf453c023d3ff98ca8eba81f6fae89ed971e7cf35f9b7d2c3" Feb 27 11:57:04 crc kubenswrapper[4728]: I0227 11:57:04.755999 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f09bd01-e7ef-447d-87a6-6833e8861688" path="/var/lib/kubelet/pods/6f09bd01-e7ef-447d-87a6-6833e8861688/volumes" Feb 27 11:57:09 crc kubenswrapper[4728]: I0227 11:57:09.053160 4728 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tszck" Feb 27 11:57:09 crc kubenswrapper[4728]: I0227 11:57:09.142418 4728 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tszck" Feb 27 11:57:10 crc kubenswrapper[4728]: I0227 11:57:10.561903 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tszck"] Feb 27 11:57:10 crc kubenswrapper[4728]: I0227 11:57:10.805145 4728 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tszck" podUID="5a266b82-2ee2-40d2-a2f3-7673165658a5" containerName="registry-server" containerID="cri-o://b51edc40a8ce5993b20ecd2489265c8c9aaa13cdb748135d60b2dbff356d15d7" gracePeriod=2 Feb 27 11:57:11 crc kubenswrapper[4728]: 
I0227 11:57:11.398103 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tszck" Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.464941 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qc4n\" (UniqueName: \"kubernetes.io/projected/5a266b82-2ee2-40d2-a2f3-7673165658a5-kube-api-access-4qc4n\") pod \"5a266b82-2ee2-40d2-a2f3-7673165658a5\" (UID: \"5a266b82-2ee2-40d2-a2f3-7673165658a5\") " Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.465121 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a266b82-2ee2-40d2-a2f3-7673165658a5-catalog-content\") pod \"5a266b82-2ee2-40d2-a2f3-7673165658a5\" (UID: \"5a266b82-2ee2-40d2-a2f3-7673165658a5\") " Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.465234 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a266b82-2ee2-40d2-a2f3-7673165658a5-utilities\") pod \"5a266b82-2ee2-40d2-a2f3-7673165658a5\" (UID: \"5a266b82-2ee2-40d2-a2f3-7673165658a5\") " Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.466438 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a266b82-2ee2-40d2-a2f3-7673165658a5-utilities" (OuterVolumeSpecName: "utilities") pod "5a266b82-2ee2-40d2-a2f3-7673165658a5" (UID: "5a266b82-2ee2-40d2-a2f3-7673165658a5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.474063 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a266b82-2ee2-40d2-a2f3-7673165658a5-kube-api-access-4qc4n" (OuterVolumeSpecName: "kube-api-access-4qc4n") pod "5a266b82-2ee2-40d2-a2f3-7673165658a5" (UID: "5a266b82-2ee2-40d2-a2f3-7673165658a5"). InnerVolumeSpecName "kube-api-access-4qc4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.535114 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a266b82-2ee2-40d2-a2f3-7673165658a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a266b82-2ee2-40d2-a2f3-7673165658a5" (UID: "5a266b82-2ee2-40d2-a2f3-7673165658a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.568579 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qc4n\" (UniqueName: \"kubernetes.io/projected/5a266b82-2ee2-40d2-a2f3-7673165658a5-kube-api-access-4qc4n\") on node \"crc\" DevicePath \"\"" Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.568610 4728 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a266b82-2ee2-40d2-a2f3-7673165658a5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.568624 4728 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a266b82-2ee2-40d2-a2f3-7673165658a5-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.821471 4728 generic.go:334] "Generic (PLEG): container finished" podID="5a266b82-2ee2-40d2-a2f3-7673165658a5" 
containerID="b51edc40a8ce5993b20ecd2489265c8c9aaa13cdb748135d60b2dbff356d15d7" exitCode=0 Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.821557 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tszck" event={"ID":"5a266b82-2ee2-40d2-a2f3-7673165658a5","Type":"ContainerDied","Data":"b51edc40a8ce5993b20ecd2489265c8c9aaa13cdb748135d60b2dbff356d15d7"} Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.821577 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tszck" Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.821614 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tszck" event={"ID":"5a266b82-2ee2-40d2-a2f3-7673165658a5","Type":"ContainerDied","Data":"48ac8541dbf6d6ed14a4fa3a31a8c12139bd60236798dd9afa8f805f84550070"} Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.821647 4728 scope.go:117] "RemoveContainer" containerID="b51edc40a8ce5993b20ecd2489265c8c9aaa13cdb748135d60b2dbff356d15d7" Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.860065 4728 scope.go:117] "RemoveContainer" containerID="50a46b678741e8b8a5f21d0e2384c2c30f189fee0d55eeb841d997fce9ec7066" Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.866049 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tszck"] Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.876593 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tszck"] Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.905329 4728 scope.go:117] "RemoveContainer" containerID="24164cfe445ede9bef371ab566b7a86a7ab5a0c2396dd5fa303a35236daa975f" Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.976439 4728 scope.go:117] "RemoveContainer" containerID="b51edc40a8ce5993b20ecd2489265c8c9aaa13cdb748135d60b2dbff356d15d7" Feb 27 
11:57:11 crc kubenswrapper[4728]: E0227 11:57:11.977083 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b51edc40a8ce5993b20ecd2489265c8c9aaa13cdb748135d60b2dbff356d15d7\": container with ID starting with b51edc40a8ce5993b20ecd2489265c8c9aaa13cdb748135d60b2dbff356d15d7 not found: ID does not exist" containerID="b51edc40a8ce5993b20ecd2489265c8c9aaa13cdb748135d60b2dbff356d15d7" Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.977145 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b51edc40a8ce5993b20ecd2489265c8c9aaa13cdb748135d60b2dbff356d15d7"} err="failed to get container status \"b51edc40a8ce5993b20ecd2489265c8c9aaa13cdb748135d60b2dbff356d15d7\": rpc error: code = NotFound desc = could not find container \"b51edc40a8ce5993b20ecd2489265c8c9aaa13cdb748135d60b2dbff356d15d7\": container with ID starting with b51edc40a8ce5993b20ecd2489265c8c9aaa13cdb748135d60b2dbff356d15d7 not found: ID does not exist" Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.977176 4728 scope.go:117] "RemoveContainer" containerID="50a46b678741e8b8a5f21d0e2384c2c30f189fee0d55eeb841d997fce9ec7066" Feb 27 11:57:11 crc kubenswrapper[4728]: E0227 11:57:11.977731 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50a46b678741e8b8a5f21d0e2384c2c30f189fee0d55eeb841d997fce9ec7066\": container with ID starting with 50a46b678741e8b8a5f21d0e2384c2c30f189fee0d55eeb841d997fce9ec7066 not found: ID does not exist" containerID="50a46b678741e8b8a5f21d0e2384c2c30f189fee0d55eeb841d997fce9ec7066" Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.977763 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a46b678741e8b8a5f21d0e2384c2c30f189fee0d55eeb841d997fce9ec7066"} err="failed to get container status 
\"50a46b678741e8b8a5f21d0e2384c2c30f189fee0d55eeb841d997fce9ec7066\": rpc error: code = NotFound desc = could not find container \"50a46b678741e8b8a5f21d0e2384c2c30f189fee0d55eeb841d997fce9ec7066\": container with ID starting with 50a46b678741e8b8a5f21d0e2384c2c30f189fee0d55eeb841d997fce9ec7066 not found: ID does not exist" Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.977783 4728 scope.go:117] "RemoveContainer" containerID="24164cfe445ede9bef371ab566b7a86a7ab5a0c2396dd5fa303a35236daa975f" Feb 27 11:57:11 crc kubenswrapper[4728]: E0227 11:57:11.978182 4728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24164cfe445ede9bef371ab566b7a86a7ab5a0c2396dd5fa303a35236daa975f\": container with ID starting with 24164cfe445ede9bef371ab566b7a86a7ab5a0c2396dd5fa303a35236daa975f not found: ID does not exist" containerID="24164cfe445ede9bef371ab566b7a86a7ab5a0c2396dd5fa303a35236daa975f" Feb 27 11:57:11 crc kubenswrapper[4728]: I0227 11:57:11.978213 4728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24164cfe445ede9bef371ab566b7a86a7ab5a0c2396dd5fa303a35236daa975f"} err="failed to get container status \"24164cfe445ede9bef371ab566b7a86a7ab5a0c2396dd5fa303a35236daa975f\": rpc error: code = NotFound desc = could not find container \"24164cfe445ede9bef371ab566b7a86a7ab5a0c2396dd5fa303a35236daa975f\": container with ID starting with 24164cfe445ede9bef371ab566b7a86a7ab5a0c2396dd5fa303a35236daa975f not found: ID does not exist" Feb 27 11:57:12 crc kubenswrapper[4728]: I0227 11:57:12.759332 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a266b82-2ee2-40d2-a2f3-7673165658a5" path="/var/lib/kubelet/pods/5a266b82-2ee2-40d2-a2f3-7673165658a5/volumes" Feb 27 11:57:16 crc kubenswrapper[4728]: I0227 11:57:16.727221 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 
11:57:16 crc kubenswrapper[4728]: E0227 11:57:16.728064 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:57:29 crc kubenswrapper[4728]: I0227 11:57:29.725836 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 11:57:29 crc kubenswrapper[4728]: E0227 11:57:29.726954 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:57:43 crc kubenswrapper[4728]: I0227 11:57:43.729097 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 11:57:43 crc kubenswrapper[4728]: E0227 11:57:43.730765 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:57:58 crc kubenswrapper[4728]: I0227 11:57:58.725531 4728 scope.go:117] "RemoveContainer" 
containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 11:57:58 crc kubenswrapper[4728]: E0227 11:57:58.726836 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.175022 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536558-t5b2f"] Feb 27 11:58:00 crc kubenswrapper[4728]: E0227 11:58:00.175855 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f09bd01-e7ef-447d-87a6-6833e8861688" containerName="extract-content" Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.175868 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f09bd01-e7ef-447d-87a6-6833e8861688" containerName="extract-content" Feb 27 11:58:00 crc kubenswrapper[4728]: E0227 11:58:00.175884 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a266b82-2ee2-40d2-a2f3-7673165658a5" containerName="extract-content" Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.175890 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a266b82-2ee2-40d2-a2f3-7673165658a5" containerName="extract-content" Feb 27 11:58:00 crc kubenswrapper[4728]: E0227 11:58:00.175917 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f09bd01-e7ef-447d-87a6-6833e8861688" containerName="extract-utilities" Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.175924 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f09bd01-e7ef-447d-87a6-6833e8861688" containerName="extract-utilities" Feb 27 11:58:00 crc kubenswrapper[4728]: E0227 
11:58:00.175933 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f09bd01-e7ef-447d-87a6-6833e8861688" containerName="registry-server" Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.175939 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f09bd01-e7ef-447d-87a6-6833e8861688" containerName="registry-server" Feb 27 11:58:00 crc kubenswrapper[4728]: E0227 11:58:00.175951 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a266b82-2ee2-40d2-a2f3-7673165658a5" containerName="extract-utilities" Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.175957 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a266b82-2ee2-40d2-a2f3-7673165658a5" containerName="extract-utilities" Feb 27 11:58:00 crc kubenswrapper[4728]: E0227 11:58:00.175967 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a266b82-2ee2-40d2-a2f3-7673165658a5" containerName="registry-server" Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.175973 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a266b82-2ee2-40d2-a2f3-7673165658a5" containerName="registry-server" Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.176192 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a266b82-2ee2-40d2-a2f3-7673165658a5" containerName="registry-server" Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.176214 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f09bd01-e7ef-447d-87a6-6833e8861688" containerName="registry-server" Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.177065 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536558-t5b2f" Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.179996 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.180124 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.192869 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.201076 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536558-t5b2f"] Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.357293 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsnqb\" (UniqueName: \"kubernetes.io/projected/0ba02054-dea6-4d8e-9958-f57a02026ebc-kube-api-access-tsnqb\") pod \"auto-csr-approver-29536558-t5b2f\" (UID: \"0ba02054-dea6-4d8e-9958-f57a02026ebc\") " pod="openshift-infra/auto-csr-approver-29536558-t5b2f" Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.460523 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsnqb\" (UniqueName: \"kubernetes.io/projected/0ba02054-dea6-4d8e-9958-f57a02026ebc-kube-api-access-tsnqb\") pod \"auto-csr-approver-29536558-t5b2f\" (UID: \"0ba02054-dea6-4d8e-9958-f57a02026ebc\") " pod="openshift-infra/auto-csr-approver-29536558-t5b2f" Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.486767 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsnqb\" (UniqueName: \"kubernetes.io/projected/0ba02054-dea6-4d8e-9958-f57a02026ebc-kube-api-access-tsnqb\") pod \"auto-csr-approver-29536558-t5b2f\" (UID: \"0ba02054-dea6-4d8e-9958-f57a02026ebc\") " 
pod="openshift-infra/auto-csr-approver-29536558-t5b2f" Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.526105 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536558-t5b2f" Feb 27 11:58:00 crc kubenswrapper[4728]: I0227 11:58:00.874275 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536558-t5b2f"] Feb 27 11:58:00 crc kubenswrapper[4728]: W0227 11:58:00.882762 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ba02054_dea6_4d8e_9958_f57a02026ebc.slice/crio-fc48003f8e2f5334fc8e716cbc7b2c51a5a6d61348bb2236a90e81777410f3ab WatchSource:0}: Error finding container fc48003f8e2f5334fc8e716cbc7b2c51a5a6d61348bb2236a90e81777410f3ab: Status 404 returned error can't find the container with id fc48003f8e2f5334fc8e716cbc7b2c51a5a6d61348bb2236a90e81777410f3ab Feb 27 11:58:01 crc kubenswrapper[4728]: I0227 11:58:01.564621 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536558-t5b2f" event={"ID":"0ba02054-dea6-4d8e-9958-f57a02026ebc","Type":"ContainerStarted","Data":"fc48003f8e2f5334fc8e716cbc7b2c51a5a6d61348bb2236a90e81777410f3ab"} Feb 27 11:58:02 crc kubenswrapper[4728]: I0227 11:58:02.619009 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536558-t5b2f" event={"ID":"0ba02054-dea6-4d8e-9958-f57a02026ebc","Type":"ContainerStarted","Data":"dc3a2870ac12166ddbba447ce748d4bb82174e140514ffaecb13f006f25afc4e"} Feb 27 11:58:02 crc kubenswrapper[4728]: I0227 11:58:02.659762 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536558-t5b2f" podStartSLOduration=1.621842253 podStartE2EDuration="2.659738151s" podCreationTimestamp="2026-02-27 11:58:00 +0000 UTC" firstStartedPulling="2026-02-27 11:58:00.891200631 +0000 UTC 
m=+5500.853566747" lastFinishedPulling="2026-02-27 11:58:01.929096509 +0000 UTC m=+5501.891462645" observedRunningTime="2026-02-27 11:58:02.637263148 +0000 UTC m=+5502.599629254" watchObservedRunningTime="2026-02-27 11:58:02.659738151 +0000 UTC m=+5502.622104267" Feb 27 11:58:03 crc kubenswrapper[4728]: I0227 11:58:03.638160 4728 generic.go:334] "Generic (PLEG): container finished" podID="0ba02054-dea6-4d8e-9958-f57a02026ebc" containerID="dc3a2870ac12166ddbba447ce748d4bb82174e140514ffaecb13f006f25afc4e" exitCode=0 Feb 27 11:58:03 crc kubenswrapper[4728]: I0227 11:58:03.638231 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536558-t5b2f" event={"ID":"0ba02054-dea6-4d8e-9958-f57a02026ebc","Type":"ContainerDied","Data":"dc3a2870ac12166ddbba447ce748d4bb82174e140514ffaecb13f006f25afc4e"} Feb 27 11:58:05 crc kubenswrapper[4728]: I0227 11:58:05.130635 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536558-t5b2f" Feb 27 11:58:05 crc kubenswrapper[4728]: I0227 11:58:05.218015 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsnqb\" (UniqueName: \"kubernetes.io/projected/0ba02054-dea6-4d8e-9958-f57a02026ebc-kube-api-access-tsnqb\") pod \"0ba02054-dea6-4d8e-9958-f57a02026ebc\" (UID: \"0ba02054-dea6-4d8e-9958-f57a02026ebc\") " Feb 27 11:58:05 crc kubenswrapper[4728]: I0227 11:58:05.230852 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ba02054-dea6-4d8e-9958-f57a02026ebc-kube-api-access-tsnqb" (OuterVolumeSpecName: "kube-api-access-tsnqb") pod "0ba02054-dea6-4d8e-9958-f57a02026ebc" (UID: "0ba02054-dea6-4d8e-9958-f57a02026ebc"). InnerVolumeSpecName "kube-api-access-tsnqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 11:58:05 crc kubenswrapper[4728]: I0227 11:58:05.321406 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsnqb\" (UniqueName: \"kubernetes.io/projected/0ba02054-dea6-4d8e-9958-f57a02026ebc-kube-api-access-tsnqb\") on node \"crc\" DevicePath \"\"" Feb 27 11:58:05 crc kubenswrapper[4728]: I0227 11:58:05.684162 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536558-t5b2f" event={"ID":"0ba02054-dea6-4d8e-9958-f57a02026ebc","Type":"ContainerDied","Data":"fc48003f8e2f5334fc8e716cbc7b2c51a5a6d61348bb2236a90e81777410f3ab"} Feb 27 11:58:05 crc kubenswrapper[4728]: I0227 11:58:05.684593 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc48003f8e2f5334fc8e716cbc7b2c51a5a6d61348bb2236a90e81777410f3ab" Feb 27 11:58:05 crc kubenswrapper[4728]: I0227 11:58:05.684281 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536558-t5b2f" Feb 27 11:58:05 crc kubenswrapper[4728]: I0227 11:58:05.753167 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536552-qxxzx"] Feb 27 11:58:05 crc kubenswrapper[4728]: I0227 11:58:05.765554 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536552-qxxzx"] Feb 27 11:58:06 crc kubenswrapper[4728]: I0227 11:58:06.749325 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff8bde6d-9c7d-4085-9299-3d6117b9ff7f" path="/var/lib/kubelet/pods/ff8bde6d-9c7d-4085-9299-3d6117b9ff7f/volumes" Feb 27 11:58:09 crc kubenswrapper[4728]: I0227 11:58:09.724585 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 11:58:09 crc kubenswrapper[4728]: E0227 11:58:09.725629 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:58:12 crc kubenswrapper[4728]: I0227 11:58:12.811024 4728 trace.go:236] Trace[1404808875]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-compactor-0" (27-Feb-2026 11:58:11.582) (total time: 1228ms): Feb 27 11:58:12 crc kubenswrapper[4728]: Trace[1404808875]: [1.228337741s] [1.228337741s] END Feb 27 11:58:21 crc kubenswrapper[4728]: I0227 11:58:21.725852 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 11:58:21 crc kubenswrapper[4728]: E0227 11:58:21.726705 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:58:33 crc kubenswrapper[4728]: I0227 11:58:33.930465 4728 scope.go:117] "RemoveContainer" containerID="34aa87de018383cd0a4a17a0bcaf0a2b855522bae1b080f3693d21a7ac8ada91" Feb 27 11:58:36 crc kubenswrapper[4728]: I0227 11:58:36.726396 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 11:58:36 crc kubenswrapper[4728]: E0227 11:58:36.727642 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:58:48 crc kubenswrapper[4728]: I0227 11:58:48.725651 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 11:58:48 crc kubenswrapper[4728]: E0227 11:58:48.727003 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:59:03 crc kubenswrapper[4728]: I0227 11:59:03.725351 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 11:59:03 crc kubenswrapper[4728]: E0227 11:59:03.726480 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:59:16 crc kubenswrapper[4728]: I0227 11:59:16.725649 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 11:59:16 crc kubenswrapper[4728]: E0227 11:59:16.726785 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:59:30 crc kubenswrapper[4728]: I0227 11:59:30.733577 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 11:59:30 crc kubenswrapper[4728]: E0227 11:59:30.734890 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:59:42 crc kubenswrapper[4728]: I0227 11:59:42.725341 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 11:59:42 crc kubenswrapper[4728]: E0227 11:59:42.726060 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 11:59:53 crc kubenswrapper[4728]: I0227 11:59:53.725658 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 11:59:53 crc kubenswrapper[4728]: E0227 11:59:53.726847 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.176521 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536560-dncct"] Feb 27 12:00:00 crc kubenswrapper[4728]: E0227 12:00:00.177829 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba02054-dea6-4d8e-9958-f57a02026ebc" containerName="oc" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.177851 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba02054-dea6-4d8e-9958-f57a02026ebc" containerName="oc" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.178239 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ba02054-dea6-4d8e-9958-f57a02026ebc" containerName="oc" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.179608 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536560-dncct" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.185149 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.185956 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.186304 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wxdfj" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.213427 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt"] Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.219699 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.222054 4728 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.222246 4728 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.242780 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536560-dncct"] Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.260319 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt"] Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.310608 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p8s9\" (UniqueName: 
\"kubernetes.io/projected/6932272c-c54a-4b06-9f0e-f531b4482819-kube-api-access-6p8s9\") pod \"auto-csr-approver-29536560-dncct\" (UID: \"6932272c-c54a-4b06-9f0e-f531b4482819\") " pod="openshift-infra/auto-csr-approver-29536560-dncct" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.414933 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9wh2\" (UniqueName: \"kubernetes.io/projected/afeb536a-f7e3-4b88-ab78-04f4cd8cefaf-kube-api-access-j9wh2\") pod \"collect-profiles-29536560-9qcrt\" (UID: \"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.414996 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afeb536a-f7e3-4b88-ab78-04f4cd8cefaf-config-volume\") pod \"collect-profiles-29536560-9qcrt\" (UID: \"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.415030 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afeb536a-f7e3-4b88-ab78-04f4cd8cefaf-secret-volume\") pod \"collect-profiles-29536560-9qcrt\" (UID: \"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.415160 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p8s9\" (UniqueName: \"kubernetes.io/projected/6932272c-c54a-4b06-9f0e-f531b4482819-kube-api-access-6p8s9\") pod \"auto-csr-approver-29536560-dncct\" (UID: \"6932272c-c54a-4b06-9f0e-f531b4482819\") " pod="openshift-infra/auto-csr-approver-29536560-dncct" Feb 27 12:00:00 
crc kubenswrapper[4728]: I0227 12:00:00.517580 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9wh2\" (UniqueName: \"kubernetes.io/projected/afeb536a-f7e3-4b88-ab78-04f4cd8cefaf-kube-api-access-j9wh2\") pod \"collect-profiles-29536560-9qcrt\" (UID: \"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.517647 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afeb536a-f7e3-4b88-ab78-04f4cd8cefaf-config-volume\") pod \"collect-profiles-29536560-9qcrt\" (UID: \"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.517678 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afeb536a-f7e3-4b88-ab78-04f4cd8cefaf-secret-volume\") pod \"collect-profiles-29536560-9qcrt\" (UID: \"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.518990 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afeb536a-f7e3-4b88-ab78-04f4cd8cefaf-config-volume\") pod \"collect-profiles-29536560-9qcrt\" (UID: \"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.782244 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p8s9\" (UniqueName: \"kubernetes.io/projected/6932272c-c54a-4b06-9f0e-f531b4482819-kube-api-access-6p8s9\") pod \"auto-csr-approver-29536560-dncct\" (UID: 
\"6932272c-c54a-4b06-9f0e-f531b4482819\") " pod="openshift-infra/auto-csr-approver-29536560-dncct" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.782655 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afeb536a-f7e3-4b88-ab78-04f4cd8cefaf-secret-volume\") pod \"collect-profiles-29536560-9qcrt\" (UID: \"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.785030 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9wh2\" (UniqueName: \"kubernetes.io/projected/afeb536a-f7e3-4b88-ab78-04f4cd8cefaf-kube-api-access-j9wh2\") pod \"collect-profiles-29536560-9qcrt\" (UID: \"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.807997 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536560-dncct" Feb 27 12:00:00 crc kubenswrapper[4728]: I0227 12:00:00.842146 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt" Feb 27 12:00:01 crc kubenswrapper[4728]: I0227 12:00:01.437192 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536560-dncct"] Feb 27 12:00:01 crc kubenswrapper[4728]: W0227 12:00:01.499830 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafeb536a_f7e3_4b88_ab78_04f4cd8cefaf.slice/crio-0998770a1537274f1c055d817f7c6588c86936277564c37e9087fd76e71a430a WatchSource:0}: Error finding container 0998770a1537274f1c055d817f7c6588c86936277564c37e9087fd76e71a430a: Status 404 returned error can't find the container with id 0998770a1537274f1c055d817f7c6588c86936277564c37e9087fd76e71a430a Feb 27 12:00:01 crc kubenswrapper[4728]: I0227 12:00:01.503300 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt"] Feb 27 12:00:02 crc kubenswrapper[4728]: I0227 12:00:02.359272 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt" event={"ID":"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf","Type":"ContainerStarted","Data":"196e3ce0ad42b76f4c1d7ddd65d8e7a26680443f45bb09a86b5ed427fb5a35ad"} Feb 27 12:00:02 crc kubenswrapper[4728]: I0227 12:00:02.359673 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt" event={"ID":"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf","Type":"ContainerStarted","Data":"0998770a1537274f1c055d817f7c6588c86936277564c37e9087fd76e71a430a"} Feb 27 12:00:02 crc kubenswrapper[4728]: I0227 12:00:02.360857 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536560-dncct" 
event={"ID":"6932272c-c54a-4b06-9f0e-f531b4482819","Type":"ContainerStarted","Data":"9e63c282352685068ec11d0882301de08d7e4a1155f6909592b0b8a9c0a89b57"} Feb 27 12:00:02 crc kubenswrapper[4728]: I0227 12:00:02.380011 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt" podStartSLOduration=2.379972732 podStartE2EDuration="2.379972732s" podCreationTimestamp="2026-02-27 12:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 12:00:02.374915724 +0000 UTC m=+5622.337281850" watchObservedRunningTime="2026-02-27 12:00:02.379972732 +0000 UTC m=+5622.342338838" Feb 27 12:00:03 crc kubenswrapper[4728]: I0227 12:00:03.371371 4728 generic.go:334] "Generic (PLEG): container finished" podID="afeb536a-f7e3-4b88-ab78-04f4cd8cefaf" containerID="196e3ce0ad42b76f4c1d7ddd65d8e7a26680443f45bb09a86b5ed427fb5a35ad" exitCode=0 Feb 27 12:00:03 crc kubenswrapper[4728]: I0227 12:00:03.371551 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt" event={"ID":"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf","Type":"ContainerDied","Data":"196e3ce0ad42b76f4c1d7ddd65d8e7a26680443f45bb09a86b5ed427fb5a35ad"} Feb 27 12:00:04 crc kubenswrapper[4728]: I0227 12:00:04.869385 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt" Feb 27 12:00:04 crc kubenswrapper[4728]: I0227 12:00:04.982897 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afeb536a-f7e3-4b88-ab78-04f4cd8cefaf-config-volume\") pod \"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf\" (UID: \"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf\") " Feb 27 12:00:04 crc kubenswrapper[4728]: I0227 12:00:04.983052 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afeb536a-f7e3-4b88-ab78-04f4cd8cefaf-secret-volume\") pod \"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf\" (UID: \"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf\") " Feb 27 12:00:04 crc kubenswrapper[4728]: I0227 12:00:04.983298 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9wh2\" (UniqueName: \"kubernetes.io/projected/afeb536a-f7e3-4b88-ab78-04f4cd8cefaf-kube-api-access-j9wh2\") pod \"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf\" (UID: \"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf\") " Feb 27 12:00:04 crc kubenswrapper[4728]: I0227 12:00:04.984000 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afeb536a-f7e3-4b88-ab78-04f4cd8cefaf-config-volume" (OuterVolumeSpecName: "config-volume") pod "afeb536a-f7e3-4b88-ab78-04f4cd8cefaf" (UID: "afeb536a-f7e3-4b88-ab78-04f4cd8cefaf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 12:00:04 crc kubenswrapper[4728]: I0227 12:00:04.990621 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afeb536a-f7e3-4b88-ab78-04f4cd8cefaf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "afeb536a-f7e3-4b88-ab78-04f4cd8cefaf" (UID: "afeb536a-f7e3-4b88-ab78-04f4cd8cefaf"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 12:00:04 crc kubenswrapper[4728]: I0227 12:00:04.990945 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afeb536a-f7e3-4b88-ab78-04f4cd8cefaf-kube-api-access-j9wh2" (OuterVolumeSpecName: "kube-api-access-j9wh2") pod "afeb536a-f7e3-4b88-ab78-04f4cd8cefaf" (UID: "afeb536a-f7e3-4b88-ab78-04f4cd8cefaf"). InnerVolumeSpecName "kube-api-access-j9wh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 12:00:05 crc kubenswrapper[4728]: I0227 12:00:05.086958 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9wh2\" (UniqueName: \"kubernetes.io/projected/afeb536a-f7e3-4b88-ab78-04f4cd8cefaf-kube-api-access-j9wh2\") on node \"crc\" DevicePath \"\"" Feb 27 12:00:05 crc kubenswrapper[4728]: I0227 12:00:05.086992 4728 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afeb536a-f7e3-4b88-ab78-04f4cd8cefaf-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 12:00:05 crc kubenswrapper[4728]: I0227 12:00:05.087007 4728 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afeb536a-f7e3-4b88-ab78-04f4cd8cefaf-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 12:00:05 crc kubenswrapper[4728]: I0227 12:00:05.421638 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536560-dncct" event={"ID":"6932272c-c54a-4b06-9f0e-f531b4482819","Type":"ContainerStarted","Data":"1fff2d65124e604b9fdc23c1993989d8dc3548204619fdf76456369853af9037"} Feb 27 12:00:05 crc kubenswrapper[4728]: I0227 12:00:05.423476 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt" event={"ID":"afeb536a-f7e3-4b88-ab78-04f4cd8cefaf","Type":"ContainerDied","Data":"0998770a1537274f1c055d817f7c6588c86936277564c37e9087fd76e71a430a"} Feb 
27 12:00:05 crc kubenswrapper[4728]: I0227 12:00:05.423671 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0998770a1537274f1c055d817f7c6588c86936277564c37e9087fd76e71a430a" Feb 27 12:00:05 crc kubenswrapper[4728]: I0227 12:00:05.423630 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536560-9qcrt" Feb 27 12:00:05 crc kubenswrapper[4728]: I0227 12:00:05.455242 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536560-dncct" podStartSLOduration=1.843665849 podStartE2EDuration="5.45521759s" podCreationTimestamp="2026-02-27 12:00:00 +0000 UTC" firstStartedPulling="2026-02-27 12:00:01.437902406 +0000 UTC m=+5621.400268512" lastFinishedPulling="2026-02-27 12:00:05.049454137 +0000 UTC m=+5625.011820253" observedRunningTime="2026-02-27 12:00:05.437479697 +0000 UTC m=+5625.399845813" watchObservedRunningTime="2026-02-27 12:00:05.45521759 +0000 UTC m=+5625.417583706" Feb 27 12:00:05 crc kubenswrapper[4728]: I0227 12:00:05.725207 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 12:00:05 crc kubenswrapper[4728]: E0227 12:00:05.725433 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 12:00:05 crc kubenswrapper[4728]: I0227 12:00:05.957643 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb"] Feb 27 12:00:05 crc kubenswrapper[4728]: I0227 12:00:05.966553 
4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536515-tdtdb"] Feb 27 12:00:06 crc kubenswrapper[4728]: I0227 12:00:06.442058 4728 generic.go:334] "Generic (PLEG): container finished" podID="6932272c-c54a-4b06-9f0e-f531b4482819" containerID="1fff2d65124e604b9fdc23c1993989d8dc3548204619fdf76456369853af9037" exitCode=0 Feb 27 12:00:06 crc kubenswrapper[4728]: I0227 12:00:06.442434 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536560-dncct" event={"ID":"6932272c-c54a-4b06-9f0e-f531b4482819","Type":"ContainerDied","Data":"1fff2d65124e604b9fdc23c1993989d8dc3548204619fdf76456369853af9037"} Feb 27 12:00:06 crc kubenswrapper[4728]: I0227 12:00:06.741568 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75a807f2-ce70-431e-8122-9aa5c60d2efb" path="/var/lib/kubelet/pods/75a807f2-ce70-431e-8122-9aa5c60d2efb/volumes" Feb 27 12:00:08 crc kubenswrapper[4728]: I0227 12:00:08.015406 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536560-dncct" Feb 27 12:00:08 crc kubenswrapper[4728]: I0227 12:00:08.161674 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p8s9\" (UniqueName: \"kubernetes.io/projected/6932272c-c54a-4b06-9f0e-f531b4482819-kube-api-access-6p8s9\") pod \"6932272c-c54a-4b06-9f0e-f531b4482819\" (UID: \"6932272c-c54a-4b06-9f0e-f531b4482819\") " Feb 27 12:00:08 crc kubenswrapper[4728]: I0227 12:00:08.173320 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6932272c-c54a-4b06-9f0e-f531b4482819-kube-api-access-6p8s9" (OuterVolumeSpecName: "kube-api-access-6p8s9") pod "6932272c-c54a-4b06-9f0e-f531b4482819" (UID: "6932272c-c54a-4b06-9f0e-f531b4482819"). InnerVolumeSpecName "kube-api-access-6p8s9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 12:00:08 crc kubenswrapper[4728]: I0227 12:00:08.264790 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p8s9\" (UniqueName: \"kubernetes.io/projected/6932272c-c54a-4b06-9f0e-f531b4482819-kube-api-access-6p8s9\") on node \"crc\" DevicePath \"\"" Feb 27 12:00:08 crc kubenswrapper[4728]: I0227 12:00:08.467434 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536560-dncct" event={"ID":"6932272c-c54a-4b06-9f0e-f531b4482819","Type":"ContainerDied","Data":"9e63c282352685068ec11d0882301de08d7e4a1155f6909592b0b8a9c0a89b57"} Feb 27 12:00:08 crc kubenswrapper[4728]: I0227 12:00:08.467492 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e63c282352685068ec11d0882301de08d7e4a1155f6909592b0b8a9c0a89b57" Feb 27 12:00:08 crc kubenswrapper[4728]: I0227 12:00:08.467563 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536560-dncct" Feb 27 12:00:08 crc kubenswrapper[4728]: I0227 12:00:08.546442 4728 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536554-t2f52"] Feb 27 12:00:08 crc kubenswrapper[4728]: I0227 12:00:08.559202 4728 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536554-t2f52"] Feb 27 12:00:08 crc kubenswrapper[4728]: I0227 12:00:08.756800 4728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c50863b8-d265-469c-abef-c687b38cd733" path="/var/lib/kubelet/pods/c50863b8-d265-469c-abef-c687b38cd733/volumes" Feb 27 12:00:20 crc kubenswrapper[4728]: I0227 12:00:20.737449 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 12:00:20 crc kubenswrapper[4728]: E0227 12:00:20.740888 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 12:00:34 crc kubenswrapper[4728]: I0227 12:00:34.079955 4728 scope.go:117] "RemoveContainer" containerID="73fe2eb2fec8b51b0f1be06be96e5eab21330f150d1efcb60122861cd4f119bc" Feb 27 12:00:34 crc kubenswrapper[4728]: I0227 12:00:34.131297 4728 scope.go:117] "RemoveContainer" containerID="2c513bb8834b63f61b290eeddbb7ef88059e4ba37289e40354e132c4a2a74ff6" Feb 27 12:00:34 crc kubenswrapper[4728]: I0227 12:00:34.726069 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 12:00:34 crc kubenswrapper[4728]: E0227 12:00:34.726762 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 12:00:47 crc kubenswrapper[4728]: I0227 12:00:47.725891 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 12:00:47 crc kubenswrapper[4728]: E0227 12:00:47.727065 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 12:00:58 crc kubenswrapper[4728]: I0227 12:00:58.725414 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 12:00:58 crc kubenswrapper[4728]: E0227 12:00:58.727427 4728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mf2hh_openshift-machine-config-operator(c2cfd349-f825-497b-b698-7fb6bc258b22)\"" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" podUID="c2cfd349-f825-497b-b698-7fb6bc258b22" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.203556 4728 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29536561-knlcq"] Feb 27 12:01:00 crc kubenswrapper[4728]: E0227 12:01:00.204662 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeb536a-f7e3-4b88-ab78-04f4cd8cefaf" containerName="collect-profiles" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.204679 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeb536a-f7e3-4b88-ab78-04f4cd8cefaf" containerName="collect-profiles" Feb 27 12:01:00 crc kubenswrapper[4728]: E0227 12:01:00.204724 4728 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6932272c-c54a-4b06-9f0e-f531b4482819" containerName="oc" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.204732 4728 state_mem.go:107] "Deleted CPUSet assignment" podUID="6932272c-c54a-4b06-9f0e-f531b4482819" containerName="oc" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.205016 4728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6932272c-c54a-4b06-9f0e-f531b4482819" containerName="oc" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.205058 4728 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="afeb536a-f7e3-4b88-ab78-04f4cd8cefaf" containerName="collect-profiles" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.206071 4728 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29536561-knlcq" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.251798 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29536561-knlcq"] Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.356086 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f11619e7-7d0f-40f6-95fd-487291ac5522-config-data\") pod \"keystone-cron-29536561-knlcq\" (UID: \"f11619e7-7d0f-40f6-95fd-487291ac5522\") " pod="openstack/keystone-cron-29536561-knlcq" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.356751 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f11619e7-7d0f-40f6-95fd-487291ac5522-fernet-keys\") pod \"keystone-cron-29536561-knlcq\" (UID: \"f11619e7-7d0f-40f6-95fd-487291ac5522\") " pod="openstack/keystone-cron-29536561-knlcq" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.356830 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11619e7-7d0f-40f6-95fd-487291ac5522-combined-ca-bundle\") pod \"keystone-cron-29536561-knlcq\" (UID: \"f11619e7-7d0f-40f6-95fd-487291ac5522\") " pod="openstack/keystone-cron-29536561-knlcq" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.357080 4728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-768xc\" (UniqueName: \"kubernetes.io/projected/f11619e7-7d0f-40f6-95fd-487291ac5522-kube-api-access-768xc\") pod \"keystone-cron-29536561-knlcq\" 
(UID: \"f11619e7-7d0f-40f6-95fd-487291ac5522\") " pod="openstack/keystone-cron-29536561-knlcq" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.460326 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f11619e7-7d0f-40f6-95fd-487291ac5522-fernet-keys\") pod \"keystone-cron-29536561-knlcq\" (UID: \"f11619e7-7d0f-40f6-95fd-487291ac5522\") " pod="openstack/keystone-cron-29536561-knlcq" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.460432 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11619e7-7d0f-40f6-95fd-487291ac5522-combined-ca-bundle\") pod \"keystone-cron-29536561-knlcq\" (UID: \"f11619e7-7d0f-40f6-95fd-487291ac5522\") " pod="openstack/keystone-cron-29536561-knlcq" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.460666 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-768xc\" (UniqueName: \"kubernetes.io/projected/f11619e7-7d0f-40f6-95fd-487291ac5522-kube-api-access-768xc\") pod \"keystone-cron-29536561-knlcq\" (UID: \"f11619e7-7d0f-40f6-95fd-487291ac5522\") " pod="openstack/keystone-cron-29536561-knlcq" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.460929 4728 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f11619e7-7d0f-40f6-95fd-487291ac5522-config-data\") pod \"keystone-cron-29536561-knlcq\" (UID: \"f11619e7-7d0f-40f6-95fd-487291ac5522\") " pod="openstack/keystone-cron-29536561-knlcq" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.583282 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11619e7-7d0f-40f6-95fd-487291ac5522-combined-ca-bundle\") pod \"keystone-cron-29536561-knlcq\" (UID: \"f11619e7-7d0f-40f6-95fd-487291ac5522\") " 
pod="openstack/keystone-cron-29536561-knlcq" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.598458 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-768xc\" (UniqueName: \"kubernetes.io/projected/f11619e7-7d0f-40f6-95fd-487291ac5522-kube-api-access-768xc\") pod \"keystone-cron-29536561-knlcq\" (UID: \"f11619e7-7d0f-40f6-95fd-487291ac5522\") " pod="openstack/keystone-cron-29536561-knlcq" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.598719 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f11619e7-7d0f-40f6-95fd-487291ac5522-fernet-keys\") pod \"keystone-cron-29536561-knlcq\" (UID: \"f11619e7-7d0f-40f6-95fd-487291ac5522\") " pod="openstack/keystone-cron-29536561-knlcq" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.602021 4728 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f11619e7-7d0f-40f6-95fd-487291ac5522-config-data\") pod \"keystone-cron-29536561-knlcq\" (UID: \"f11619e7-7d0f-40f6-95fd-487291ac5522\") " pod="openstack/keystone-cron-29536561-knlcq" Feb 27 12:01:00 crc kubenswrapper[4728]: I0227 12:01:00.866958 4728 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29536561-knlcq" Feb 27 12:01:01 crc kubenswrapper[4728]: W0227 12:01:01.517411 4728 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf11619e7_7d0f_40f6_95fd_487291ac5522.slice/crio-45163c536b4950d7be62b124b4f39e2c4346f88adc05087e758ce24bc667ebfe WatchSource:0}: Error finding container 45163c536b4950d7be62b124b4f39e2c4346f88adc05087e758ce24bc667ebfe: Status 404 returned error can't find the container with id 45163c536b4950d7be62b124b4f39e2c4346f88adc05087e758ce24bc667ebfe Feb 27 12:01:01 crc kubenswrapper[4728]: I0227 12:01:01.522405 4728 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29536561-knlcq"] Feb 27 12:01:02 crc kubenswrapper[4728]: I0227 12:01:02.198198 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536561-knlcq" event={"ID":"f11619e7-7d0f-40f6-95fd-487291ac5522","Type":"ContainerStarted","Data":"5dd7418fb48eed035f91334ac67429f2a99fdf29d0926bc41d4af7419a8e8331"} Feb 27 12:01:02 crc kubenswrapper[4728]: I0227 12:01:02.199009 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536561-knlcq" event={"ID":"f11619e7-7d0f-40f6-95fd-487291ac5522","Type":"ContainerStarted","Data":"45163c536b4950d7be62b124b4f39e2c4346f88adc05087e758ce24bc667ebfe"} Feb 27 12:01:02 crc kubenswrapper[4728]: I0227 12:01:02.228833 4728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29536561-knlcq" podStartSLOduration=2.228811632 podStartE2EDuration="2.228811632s" podCreationTimestamp="2026-02-27 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 12:01:02.215287013 +0000 UTC m=+5682.177653119" watchObservedRunningTime="2026-02-27 12:01:02.228811632 +0000 UTC m=+5682.191177738" Feb 27 12:01:05 crc 
kubenswrapper[4728]: I0227 12:01:05.262821 4728 generic.go:334] "Generic (PLEG): container finished" podID="f11619e7-7d0f-40f6-95fd-487291ac5522" containerID="5dd7418fb48eed035f91334ac67429f2a99fdf29d0926bc41d4af7419a8e8331" exitCode=0 Feb 27 12:01:05 crc kubenswrapper[4728]: I0227 12:01:05.263546 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536561-knlcq" event={"ID":"f11619e7-7d0f-40f6-95fd-487291ac5522","Type":"ContainerDied","Data":"5dd7418fb48eed035f91334ac67429f2a99fdf29d0926bc41d4af7419a8e8331"} Feb 27 12:01:06 crc kubenswrapper[4728]: I0227 12:01:06.747409 4728 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29536561-knlcq" Feb 27 12:01:06 crc kubenswrapper[4728]: I0227 12:01:06.867349 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11619e7-7d0f-40f6-95fd-487291ac5522-combined-ca-bundle\") pod \"f11619e7-7d0f-40f6-95fd-487291ac5522\" (UID: \"f11619e7-7d0f-40f6-95fd-487291ac5522\") " Feb 27 12:01:06 crc kubenswrapper[4728]: I0227 12:01:06.867520 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f11619e7-7d0f-40f6-95fd-487291ac5522-config-data\") pod \"f11619e7-7d0f-40f6-95fd-487291ac5522\" (UID: \"f11619e7-7d0f-40f6-95fd-487291ac5522\") " Feb 27 12:01:06 crc kubenswrapper[4728]: I0227 12:01:06.867583 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f11619e7-7d0f-40f6-95fd-487291ac5522-fernet-keys\") pod \"f11619e7-7d0f-40f6-95fd-487291ac5522\" (UID: \"f11619e7-7d0f-40f6-95fd-487291ac5522\") " Feb 27 12:01:06 crc kubenswrapper[4728]: I0227 12:01:06.867631 4728 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-768xc\" (UniqueName: 
\"kubernetes.io/projected/f11619e7-7d0f-40f6-95fd-487291ac5522-kube-api-access-768xc\") pod \"f11619e7-7d0f-40f6-95fd-487291ac5522\" (UID: \"f11619e7-7d0f-40f6-95fd-487291ac5522\") " Feb 27 12:01:06 crc kubenswrapper[4728]: I0227 12:01:06.874555 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f11619e7-7d0f-40f6-95fd-487291ac5522-kube-api-access-768xc" (OuterVolumeSpecName: "kube-api-access-768xc") pod "f11619e7-7d0f-40f6-95fd-487291ac5522" (UID: "f11619e7-7d0f-40f6-95fd-487291ac5522"). InnerVolumeSpecName "kube-api-access-768xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 12:01:06 crc kubenswrapper[4728]: I0227 12:01:06.875273 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f11619e7-7d0f-40f6-95fd-487291ac5522-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f11619e7-7d0f-40f6-95fd-487291ac5522" (UID: "f11619e7-7d0f-40f6-95fd-487291ac5522"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 12:01:06 crc kubenswrapper[4728]: I0227 12:01:06.905096 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f11619e7-7d0f-40f6-95fd-487291ac5522-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f11619e7-7d0f-40f6-95fd-487291ac5522" (UID: "f11619e7-7d0f-40f6-95fd-487291ac5522"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 12:01:06 crc kubenswrapper[4728]: I0227 12:01:06.959656 4728 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f11619e7-7d0f-40f6-95fd-487291ac5522-config-data" (OuterVolumeSpecName: "config-data") pod "f11619e7-7d0f-40f6-95fd-487291ac5522" (UID: "f11619e7-7d0f-40f6-95fd-487291ac5522"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 12:01:06 crc kubenswrapper[4728]: I0227 12:01:06.973292 4728 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f11619e7-7d0f-40f6-95fd-487291ac5522-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 12:01:06 crc kubenswrapper[4728]: I0227 12:01:06.973352 4728 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f11619e7-7d0f-40f6-95fd-487291ac5522-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 27 12:01:06 crc kubenswrapper[4728]: I0227 12:01:06.973367 4728 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-768xc\" (UniqueName: \"kubernetes.io/projected/f11619e7-7d0f-40f6-95fd-487291ac5522-kube-api-access-768xc\") on node \"crc\" DevicePath \"\"" Feb 27 12:01:06 crc kubenswrapper[4728]: I0227 12:01:06.973429 4728 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11619e7-7d0f-40f6-95fd-487291ac5522-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 12:01:07 crc kubenswrapper[4728]: I0227 12:01:07.294454 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536561-knlcq" event={"ID":"f11619e7-7d0f-40f6-95fd-487291ac5522","Type":"ContainerDied","Data":"45163c536b4950d7be62b124b4f39e2c4346f88adc05087e758ce24bc667ebfe"} Feb 27 12:01:07 crc kubenswrapper[4728]: I0227 12:01:07.294544 4728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45163c536b4950d7be62b124b4f39e2c4346f88adc05087e758ce24bc667ebfe" Feb 27 12:01:07 crc kubenswrapper[4728]: I0227 12:01:07.294539 4728 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29536561-knlcq" Feb 27 12:01:13 crc kubenswrapper[4728]: I0227 12:01:13.725665 4728 scope.go:117] "RemoveContainer" containerID="9b3890ae50eebd358cd76204b52309ec97b933f23feef3dbb7097656248b44be" Feb 27 12:01:14 crc kubenswrapper[4728]: I0227 12:01:14.426989 4728 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf2hh" event={"ID":"c2cfd349-f825-497b-b698-7fb6bc258b22","Type":"ContainerStarted","Data":"998aff27f43c3fe52b1c8164fba9fa2c0979cc9df8c3598090f332477e14d782"} Feb 27 12:01:34 crc kubenswrapper[4728]: I0227 12:01:34.246662 4728 scope.go:117] "RemoveContainer" containerID="656ecff4ced4888623ad7ac4fb14a24f6007de5963e16c20fbc7c4b82cc56126" Feb 27 12:01:34 crc kubenswrapper[4728]: I0227 12:01:34.293433 4728 scope.go:117] "RemoveContainer" containerID="cefbe0f0bf0963e521c3683a187043a7ed0984ad937044a02ad6738590991b30" Feb 27 12:01:34 crc kubenswrapper[4728]: I0227 12:01:34.390318 4728 scope.go:117] "RemoveContainer" containerID="bf697459bcf31e8a171233f0dbdf1dd89dd78a4a5cf05101e9038cc7f858da59"